Test Report: KVM_Linux_crio 19302

                    
686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-19:35416

Tests failed (30/320)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 155.57
41 TestAddons/parallel/MetricsServer 353.48
54 TestAddons/StoppedEnableDisable 154.2
173 TestMultiControlPlane/serial/StopSecondaryNode 141.85
175 TestMultiControlPlane/serial/RestartSecondaryNode 52.43
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 379.08
180 TestMultiControlPlane/serial/StopCluster 141.91
240 TestMultiNode/serial/RestartKeepsNodes 326.16
242 TestMultiNode/serial/StopMultiNode 141.28
249 TestPreload 352.85
257 TestKubernetesUpgrade 405.49
287 TestPause/serial/SecondStartNoReconfiguration 61.69
294 TestStartStop/group/old-k8s-version/serial/FirstStart 311.9
303 TestStartStop/group/embed-certs/serial/Stop 139.14
304 TestStartStop/group/no-preload/serial/Stop 139.34
307 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 93.58
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.08
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
318 TestStartStop/group/old-k8s-version/serial/SecondStart 744.63
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 545.5
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 545.58
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.18
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.43
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 340.7
326 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 528
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 330.78
328 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 104.96
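
The failure expanded below is TestAddons/parallel/Ingress. Its core check (addons_test.go:264 in the log) curls the ingress from inside the VM; assuming the addons-018825 profile is still running, it can be replayed by hand with:

	out/minikube-linux-amd64 -p addons-018825 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

The remote curl exited with status 28, curl's operation-timeout code, so the request timed out rather than being refused.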
TestAddons/parallel/Ingress (155.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-018825 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-018825 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-018825 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e717529c-0e3d-45e0-a926-ef718c1b5993] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e717529c-0e3d-45e0-a926-ef718c1b5993] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003927271s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-018825 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.818813302s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-018825 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.100
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-018825 addons disable ingress-dns --alsologtostderr -v=1: (1.090059398s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-018825 addons disable ingress --alsologtostderr -v=1: (7.692039559s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-018825 -n addons-018825
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-018825 logs -n 25: (1.252051852s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| delete  | -p download-only-819425                                                                     | download-only-819425 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| delete  | -p download-only-944621                                                                     | download-only-944621 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| delete  | -p download-only-905246                                                                     | download-only-905246 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| delete  | -p download-only-819425                                                                     | download-only-819425 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-598622 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC |                     |
	|         | binary-mirror-598622                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-598622                                                                     | binary-mirror-598622 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| addons  | disable dashboard -p                                                                        | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC |                     |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC |                     |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-018825 --wait=true                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | -p addons-018825                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | -p addons-018825                                                                            |                      |         |         |                     |                     |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-018825 ip                                                                            | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-018825 ssh curl -s                                                                   | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:25 UTC | 19 Jul 24 14:25 UTC |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-018825 ssh cat                                                                       | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:25 UTC | 19 Jul 24 14:25 UTC |
	|         | /opt/local-path-provisioner/pvc-b22e2d8b-ef50-4e0e-ac1c-eda671cc595d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:25 UTC | 19 Jul 24 14:26 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-018825 addons                                                                        | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:26 UTC | 19 Jul 24 14:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-018825 addons                                                                        | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:26 UTC | 19 Jul 24 14:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-018825 ip                                                                            | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:27 UTC | 19 Jul 24 14:27 UTC |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:27 UTC | 19 Jul 24 14:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:27 UTC | 19 Jul 24 14:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:22:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:22:02.276134   12169 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:22:02.276405   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:22:02.276415   12169 out.go:304] Setting ErrFile to fd 2...
	I0719 14:22:02.276419   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:22:02.276587   12169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:22:02.277153   12169 out.go:298] Setting JSON to false
	I0719 14:22:02.278021   12169 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":268,"bootTime":1721398654,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:22:02.278079   12169 start.go:139] virtualization: kvm guest
	I0719 14:22:02.279993   12169 out.go:177] * [addons-018825] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:22:02.281615   12169 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:22:02.281667   12169 notify.go:220] Checking for updates...
	I0719 14:22:02.283972   12169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:22:02.285155   12169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:22:02.286404   12169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:22:02.287663   12169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:22:02.288966   12169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:22:02.290429   12169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:22:02.323929   12169 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 14:22:02.325226   12169 start.go:297] selected driver: kvm2
	I0719 14:22:02.325253   12169 start.go:901] validating driver "kvm2" against <nil>
	I0719 14:22:02.325265   12169 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:22:02.325974   12169 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:22:02.326043   12169 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:22:02.340475   12169 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:22:02.340533   12169 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 14:22:02.340770   12169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:22:02.340826   12169 cni.go:84] Creating CNI manager for ""
	I0719 14:22:02.340839   12169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:22:02.340848   12169 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 14:22:02.340909   12169 start.go:340] cluster config:
	{Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:22:02.340997   12169 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:22:02.342880   12169 out.go:177] * Starting "addons-018825" primary control-plane node in "addons-018825" cluster
	I0719 14:22:02.344310   12169 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:22:02.344349   12169 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 14:22:02.344357   12169 cache.go:56] Caching tarball of preloaded images
	I0719 14:22:02.344443   12169 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:22:02.344452   12169 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:22:02.344721   12169 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/config.json ...
	I0719 14:22:02.344738   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/config.json: {Name:mk2182d403a7be310714d6cedc0644b0c733d792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:02.344868   12169 start.go:360] acquireMachinesLock for addons-018825: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:22:02.344913   12169 start.go:364] duration metric: took 33.673µs to acquireMachinesLock for "addons-018825"
	I0719 14:22:02.344930   12169 start.go:93] Provisioning new machine with config: &{Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:22:02.344975   12169 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 14:22:02.346464   12169 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 14:22:02.346577   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:02.346614   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:02.360713   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0719 14:22:02.361173   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:02.361799   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:02.361816   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:02.362098   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:02.362280   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:02.362422   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:02.362561   12169 start.go:159] libmachine.API.Create for "addons-018825" (driver="kvm2")
	I0719 14:22:02.362589   12169 client.go:168] LocalClient.Create starting
	I0719 14:22:02.362630   12169 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:22:02.540029   12169 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:22:02.643799   12169 main.go:141] libmachine: Running pre-create checks...
	I0719 14:22:02.643824   12169 main.go:141] libmachine: (addons-018825) Calling .PreCreateCheck
	I0719 14:22:02.644334   12169 main.go:141] libmachine: (addons-018825) Calling .GetConfigRaw
	I0719 14:22:02.644824   12169 main.go:141] libmachine: Creating machine...
	I0719 14:22:02.644838   12169 main.go:141] libmachine: (addons-018825) Calling .Create
	I0719 14:22:02.644991   12169 main.go:141] libmachine: (addons-018825) Creating KVM machine...
	I0719 14:22:02.646186   12169 main.go:141] libmachine: (addons-018825) DBG | found existing default KVM network
	I0719 14:22:02.646897   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:02.646768   12191 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0719 14:22:02.646975   12169 main.go:141] libmachine: (addons-018825) DBG | created network xml: 
	I0719 14:22:02.646996   12169 main.go:141] libmachine: (addons-018825) DBG | <network>
	I0719 14:22:02.647004   12169 main.go:141] libmachine: (addons-018825) DBG |   <name>mk-addons-018825</name>
	I0719 14:22:02.647016   12169 main.go:141] libmachine: (addons-018825) DBG |   <dns enable='no'/>
	I0719 14:22:02.647023   12169 main.go:141] libmachine: (addons-018825) DBG |   
	I0719 14:22:02.647030   12169 main.go:141] libmachine: (addons-018825) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 14:22:02.647041   12169 main.go:141] libmachine: (addons-018825) DBG |     <dhcp>
	I0719 14:22:02.647049   12169 main.go:141] libmachine: (addons-018825) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 14:22:02.647059   12169 main.go:141] libmachine: (addons-018825) DBG |     </dhcp>
	I0719 14:22:02.647064   12169 main.go:141] libmachine: (addons-018825) DBG |   </ip>
	I0719 14:22:02.647070   12169 main.go:141] libmachine: (addons-018825) DBG |   
	I0719 14:22:02.647077   12169 main.go:141] libmachine: (addons-018825) DBG | </network>
	I0719 14:22:02.647085   12169 main.go:141] libmachine: (addons-018825) DBG | 
	I0719 14:22:02.652713   12169 main.go:141] libmachine: (addons-018825) DBG | trying to create private KVM network mk-addons-018825 192.168.39.0/24...
	I0719 14:22:02.718503   12169 main.go:141] libmachine: (addons-018825) DBG | private KVM network mk-addons-018825 192.168.39.0/24 created
	I0719 14:22:02.718534   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:02.718467   12191 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:22:02.718553   12169 main.go:141] libmachine: (addons-018825) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825 ...
	I0719 14:22:02.718567   12169 main.go:141] libmachine: (addons-018825) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:22:02.718694   12169 main.go:141] libmachine: (addons-018825) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:22:02.973162   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:02.973048   12191 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa...
	I0719 14:22:03.039480   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:03.039347   12191 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/addons-018825.rawdisk...
	I0719 14:22:03.039517   12169 main.go:141] libmachine: (addons-018825) DBG | Writing magic tar header
	I0719 14:22:03.039596   12169 main.go:141] libmachine: (addons-018825) DBG | Writing SSH key tar header
	I0719 14:22:03.039642   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:03.039507   12191 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825 ...
	I0719 14:22:03.039674   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825 (perms=drwx------)
	I0719 14:22:03.039693   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825
	I0719 14:22:03.039704   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:22:03.039711   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:22:03.039718   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:22:03.039728   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:22:03.039743   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:22:03.039761   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:22:03.039776   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:22:03.039784   12169 main.go:141] libmachine: (addons-018825) Creating domain...
	I0719 14:22:03.039805   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:22:03.039837   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:22:03.039850   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:22:03.039859   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home
	I0719 14:22:03.039872   12169 main.go:141] libmachine: (addons-018825) DBG | Skipping /home - not owner
	I0719 14:22:03.041003   12169 main.go:141] libmachine: (addons-018825) define libvirt domain using xml: 
	I0719 14:22:03.041023   12169 main.go:141] libmachine: (addons-018825) <domain type='kvm'>
	I0719 14:22:03.041033   12169 main.go:141] libmachine: (addons-018825)   <name>addons-018825</name>
	I0719 14:22:03.041040   12169 main.go:141] libmachine: (addons-018825)   <memory unit='MiB'>4000</memory>
	I0719 14:22:03.041060   12169 main.go:141] libmachine: (addons-018825)   <vcpu>2</vcpu>
	I0719 14:22:03.041074   12169 main.go:141] libmachine: (addons-018825)   <features>
	I0719 14:22:03.041098   12169 main.go:141] libmachine: (addons-018825)     <acpi/>
	I0719 14:22:03.041115   12169 main.go:141] libmachine: (addons-018825)     <apic/>
	I0719 14:22:03.041121   12169 main.go:141] libmachine: (addons-018825)     <pae/>
	I0719 14:22:03.041127   12169 main.go:141] libmachine: (addons-018825)     
	I0719 14:22:03.041132   12169 main.go:141] libmachine: (addons-018825)   </features>
	I0719 14:22:03.041140   12169 main.go:141] libmachine: (addons-018825)   <cpu mode='host-passthrough'>
	I0719 14:22:03.041145   12169 main.go:141] libmachine: (addons-018825)   
	I0719 14:22:03.041152   12169 main.go:141] libmachine: (addons-018825)   </cpu>
	I0719 14:22:03.041164   12169 main.go:141] libmachine: (addons-018825)   <os>
	I0719 14:22:03.041174   12169 main.go:141] libmachine: (addons-018825)     <type>hvm</type>
	I0719 14:22:03.041182   12169 main.go:141] libmachine: (addons-018825)     <boot dev='cdrom'/>
	I0719 14:22:03.041197   12169 main.go:141] libmachine: (addons-018825)     <boot dev='hd'/>
	I0719 14:22:03.041207   12169 main.go:141] libmachine: (addons-018825)     <bootmenu enable='no'/>
	I0719 14:22:03.041211   12169 main.go:141] libmachine: (addons-018825)   </os>
	I0719 14:22:03.041217   12169 main.go:141] libmachine: (addons-018825)   <devices>
	I0719 14:22:03.041224   12169 main.go:141] libmachine: (addons-018825)     <disk type='file' device='cdrom'>
	I0719 14:22:03.041232   12169 main.go:141] libmachine: (addons-018825)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/boot2docker.iso'/>
	I0719 14:22:03.041239   12169 main.go:141] libmachine: (addons-018825)       <target dev='hdc' bus='scsi'/>
	I0719 14:22:03.041263   12169 main.go:141] libmachine: (addons-018825)       <readonly/>
	I0719 14:22:03.041282   12169 main.go:141] libmachine: (addons-018825)     </disk>
	I0719 14:22:03.041293   12169 main.go:141] libmachine: (addons-018825)     <disk type='file' device='disk'>
	I0719 14:22:03.041306   12169 main.go:141] libmachine: (addons-018825)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:22:03.041322   12169 main.go:141] libmachine: (addons-018825)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/addons-018825.rawdisk'/>
	I0719 14:22:03.041331   12169 main.go:141] libmachine: (addons-018825)       <target dev='hda' bus='virtio'/>
	I0719 14:22:03.041337   12169 main.go:141] libmachine: (addons-018825)     </disk>
	I0719 14:22:03.041342   12169 main.go:141] libmachine: (addons-018825)     <interface type='network'>
	I0719 14:22:03.041349   12169 main.go:141] libmachine: (addons-018825)       <source network='mk-addons-018825'/>
	I0719 14:22:03.041355   12169 main.go:141] libmachine: (addons-018825)       <model type='virtio'/>
	I0719 14:22:03.041366   12169 main.go:141] libmachine: (addons-018825)     </interface>
	I0719 14:22:03.041385   12169 main.go:141] libmachine: (addons-018825)     <interface type='network'>
	I0719 14:22:03.041401   12169 main.go:141] libmachine: (addons-018825)       <source network='default'/>
	I0719 14:22:03.041412   12169 main.go:141] libmachine: (addons-018825)       <model type='virtio'/>
	I0719 14:22:03.041419   12169 main.go:141] libmachine: (addons-018825)     </interface>
	I0719 14:22:03.041428   12169 main.go:141] libmachine: (addons-018825)     <serial type='pty'>
	I0719 14:22:03.041436   12169 main.go:141] libmachine: (addons-018825)       <target port='0'/>
	I0719 14:22:03.041441   12169 main.go:141] libmachine: (addons-018825)     </serial>
	I0719 14:22:03.041448   12169 main.go:141] libmachine: (addons-018825)     <console type='pty'>
	I0719 14:22:03.041456   12169 main.go:141] libmachine: (addons-018825)       <target type='serial' port='0'/>
	I0719 14:22:03.041470   12169 main.go:141] libmachine: (addons-018825)     </console>
	I0719 14:22:03.041488   12169 main.go:141] libmachine: (addons-018825)     <rng model='virtio'>
	I0719 14:22:03.041499   12169 main.go:141] libmachine: (addons-018825)       <backend model='random'>/dev/random</backend>
	I0719 14:22:03.041510   12169 main.go:141] libmachine: (addons-018825)     </rng>
	I0719 14:22:03.041517   12169 main.go:141] libmachine: (addons-018825)     
	I0719 14:22:03.041525   12169 main.go:141] libmachine: (addons-018825)     
	I0719 14:22:03.041537   12169 main.go:141] libmachine: (addons-018825)   </devices>
	I0719 14:22:03.041543   12169 main.go:141] libmachine: (addons-018825) </domain>
	I0719 14:22:03.041550   12169 main.go:141] libmachine: (addons-018825) 
	I0719 14:22:03.048136   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:ec:9c:95 in network default
	I0719 14:22:03.048626   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:03.048640   12169 main.go:141] libmachine: (addons-018825) Ensuring networks are active...
	I0719 14:22:03.049275   12169 main.go:141] libmachine: (addons-018825) Ensuring network default is active
	I0719 14:22:03.049580   12169 main.go:141] libmachine: (addons-018825) Ensuring network mk-addons-018825 is active
	I0719 14:22:03.050147   12169 main.go:141] libmachine: (addons-018825) Getting domain xml...
	I0719 14:22:03.050961   12169 main.go:141] libmachine: (addons-018825) Creating domain...
	I0719 14:22:04.436146   12169 main.go:141] libmachine: (addons-018825) Waiting to get IP...
	I0719 14:22:04.436961   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:04.437516   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:04.437544   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:04.437467   12191 retry.go:31] will retry after 304.107643ms: waiting for machine to come up
	I0719 14:22:04.743020   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:04.743451   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:04.743479   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:04.743428   12191 retry.go:31] will retry after 286.459263ms: waiting for machine to come up
	I0719 14:22:05.032070   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:05.032577   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:05.032604   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:05.032534   12191 retry.go:31] will retry after 373.323599ms: waiting for machine to come up
	I0719 14:22:05.407334   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:05.407834   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:05.407871   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:05.407780   12191 retry.go:31] will retry after 392.760765ms: waiting for machine to come up
	I0719 14:22:05.802339   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:05.802879   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:05.802907   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:05.802833   12191 retry.go:31] will retry after 514.7879ms: waiting for machine to come up
	I0719 14:22:06.319598   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:06.320043   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:06.320074   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:06.319994   12191 retry.go:31] will retry after 719.918001ms: waiting for machine to come up
	I0719 14:22:07.041925   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:07.042283   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:07.042305   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:07.042222   12191 retry.go:31] will retry after 1.100071039s: waiting for machine to come up
	I0719 14:22:08.144199   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:08.144748   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:08.144777   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:08.144697   12191 retry.go:31] will retry after 914.322914ms: waiting for machine to come up
	I0719 14:22:09.060314   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:09.060804   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:09.060834   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:09.060751   12191 retry.go:31] will retry after 1.190064357s: waiting for machine to come up
	I0719 14:22:10.253077   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:10.253473   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:10.253503   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:10.253435   12191 retry.go:31] will retry after 1.875735266s: waiting for machine to come up
	I0719 14:22:12.131268   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:12.131674   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:12.131703   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:12.131642   12191 retry.go:31] will retry after 2.089554021s: waiting for machine to come up
	I0719 14:22:14.223487   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:14.223948   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:14.223975   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:14.223902   12191 retry.go:31] will retry after 3.555218909s: waiting for machine to come up
	I0719 14:22:17.780236   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:17.780590   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:17.780633   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:17.780578   12191 retry.go:31] will retry after 3.539642936s: waiting for machine to come up
	I0719 14:22:21.324156   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:21.324601   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:21.324629   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:21.324503   12191 retry.go:31] will retry after 4.417103586s: waiting for machine to come up
	I0719 14:22:25.745978   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.746500   12169 main.go:141] libmachine: (addons-018825) Found IP for machine: 192.168.39.100
	I0719 14:22:25.746518   12169 main.go:141] libmachine: (addons-018825) Reserving static IP address...
	I0719 14:22:25.746538   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has current primary IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.746844   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find host DHCP lease matching {name: "addons-018825", mac: "52:54:00:7c:72:1e", ip: "192.168.39.100"} in network mk-addons-018825
	I0719 14:22:25.816418   12169 main.go:141] libmachine: (addons-018825) DBG | Getting to WaitForSSH function...
	I0719 14:22:25.816445   12169 main.go:141] libmachine: (addons-018825) Reserved static IP address: 192.168.39.100
	I0719 14:22:25.816457   12169 main.go:141] libmachine: (addons-018825) Waiting for SSH to be available...
	I0719 14:22:25.819369   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.819751   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:25.819783   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.819913   12169 main.go:141] libmachine: (addons-018825) DBG | Using SSH client type: external
	I0719 14:22:25.819945   12169 main.go:141] libmachine: (addons-018825) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa (-rw-------)
	I0719 14:22:25.819976   12169 main.go:141] libmachine: (addons-018825) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:22:25.819988   12169 main.go:141] libmachine: (addons-018825) DBG | About to run SSH command:
	I0719 14:22:25.820034   12169 main.go:141] libmachine: (addons-018825) DBG | exit 0
	I0719 14:22:25.954287   12169 main.go:141] libmachine: (addons-018825) DBG | SSH cmd err, output: <nil>: 
	I0719 14:22:25.954549   12169 main.go:141] libmachine: (addons-018825) KVM machine creation complete!
	I0719 14:22:25.954850   12169 main.go:141] libmachine: (addons-018825) Calling .GetConfigRaw
	I0719 14:22:25.955507   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:25.955756   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:25.955938   12169 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:22:25.955957   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:25.957182   12169 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:22:25.957197   12169 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:22:25.957205   12169 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:22:25.957215   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:25.959386   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.959683   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:25.959719   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.959861   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:25.960013   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:25.960130   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:25.960244   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:25.960407   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:25.960600   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:25.960612   12169 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:22:26.065254   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:22:26.065278   12169 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:22:26.065284   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.067771   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.068055   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.068082   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.068225   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.068384   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.068502   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.068617   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.068760   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.068960   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.068971   12169 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:22:26.174963   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:22:26.175019   12169 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:22:26.175027   12169 main.go:141] libmachine: Provisioning with buildroot...
	I0719 14:22:26.175039   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:26.175263   12169 buildroot.go:166] provisioning hostname "addons-018825"
	I0719 14:22:26.175284   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:26.175460   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.177906   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.178251   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.178278   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.178434   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.178602   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.178737   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.178878   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.179127   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.179284   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.179296   12169 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-018825 && echo "addons-018825" | sudo tee /etc/hostname
	I0719 14:22:26.300586   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-018825
	
	I0719 14:22:26.300615   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.303272   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.303604   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.303624   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.303808   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.303991   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.304154   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.304286   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.304425   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.304609   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.304627   12169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-018825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-018825/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-018825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:22:26.418099   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:22:26.418128   12169 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:22:26.418145   12169 buildroot.go:174] setting up certificates
	I0719 14:22:26.418154   12169 provision.go:84] configureAuth start
	I0719 14:22:26.418161   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:26.418397   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:26.420892   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.421219   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.421248   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.421349   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.424153   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.424424   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.424442   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.424578   12169 provision.go:143] copyHostCerts
	I0719 14:22:26.424644   12169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:22:26.424775   12169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:22:26.424838   12169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:22:26.424887   12169 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.addons-018825 san=[127.0.0.1 192.168.39.100 addons-018825 localhost minikube]
	I0719 14:22:26.518450   12169 provision.go:177] copyRemoteCerts
	I0719 14:22:26.518501   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:22:26.518522   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.521034   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.521374   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.521403   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.521571   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.521823   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.521966   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.522109   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:26.604186   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:22:26.627895   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 14:22:26.650869   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 14:22:26.672910   12169 provision.go:87] duration metric: took 254.74155ms to configureAuth
	I0719 14:22:26.672933   12169 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:22:26.673100   12169 config.go:182] Loaded profile config "addons-018825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:22:26.673166   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.675876   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.676214   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.676241   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.676397   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.676579   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.676742   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.676881   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.677028   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.677173   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.677186   12169 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:22:26.950068   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
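The block above drops a one-line environment file for CRI-O and restarts the service. A quick way to confirm the result on the guest, using only the paths shown in the log (sketch):

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl status crio --no-pager    # the restart above should leave crio active
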
	I0719 14:22:26.950097   12169 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:22:26.950108   12169 main.go:141] libmachine: (addons-018825) Calling .GetURL
	I0719 14:22:26.951314   12169 main.go:141] libmachine: (addons-018825) DBG | Using libvirt version 6000000
	I0719 14:22:26.953393   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.953752   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.953778   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.953932   12169 main.go:141] libmachine: Docker is up and running!
	I0719 14:22:26.953949   12169 main.go:141] libmachine: Reticulating splines...
	I0719 14:22:26.953958   12169 client.go:171] duration metric: took 24.59136072s to LocalClient.Create
	I0719 14:22:26.953987   12169 start.go:167] duration metric: took 24.591425255s to libmachine.API.Create "addons-018825"
	I0719 14:22:26.954000   12169 start.go:293] postStartSetup for "addons-018825" (driver="kvm2")
	I0719 14:22:26.954016   12169 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:22:26.954037   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:26.954279   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:22:26.954302   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.956188   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.956453   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.956478   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.956600   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.956760   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.956908   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.957028   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:27.040706   12169 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:22:27.044709   12169 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:22:27.044729   12169 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:22:27.044808   12169 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:22:27.044832   12169 start.go:296] duration metric: took 90.824275ms for postStartSetup
	I0719 14:22:27.044872   12169 main.go:141] libmachine: (addons-018825) Calling .GetConfigRaw
	I0719 14:22:27.045393   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:27.047621   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.048112   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.048138   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.048376   12169 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/config.json ...
	I0719 14:22:27.048538   12169 start.go:128] duration metric: took 24.703554859s to createHost
	I0719 14:22:27.048558   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:27.050721   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.051147   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.051167   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.051300   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:27.051459   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.051690   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.051800   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:27.051927   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:27.052075   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:27.052084   12169 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 14:22:27.158568   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721398947.135991007
	
	I0719 14:22:27.158585   12169 fix.go:216] guest clock: 1721398947.135991007
	I0719 14:22:27.158596   12169 fix.go:229] Guest: 2024-07-19 14:22:27.135991007 +0000 UTC Remote: 2024-07-19 14:22:27.048547952 +0000 UTC m=+24.805298864 (delta=87.443055ms)
	I0719 14:22:27.158631   12169 fix.go:200] guest clock delta is within tolerance: 87.443055ms
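
The delta logged above is simply the guest clock (read with date +%s.%N over SSH) minus the host-side timestamp taken a moment earlier; both read 14:22:27, so only the fractional seconds differ. A worked check of the logged numbers:

	awk 'BEGIN { printf "%.9f\n", 0.135991007 - 0.048547952 }'
	# -> 0.087443055, i.e. the 87.443055ms delta compared against the resync tolerance above
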
	I0719 14:22:27.158636   12169 start.go:83] releasing machines lock for "addons-018825", held for 24.813714364s
	I0719 14:22:27.158657   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.158888   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:27.161163   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.161493   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.161519   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.161612   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.162042   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.162184   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.162265   12169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:22:27.162317   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:27.162424   12169 ssh_runner.go:195] Run: cat /version.json
	I0719 14:22:27.162445   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:27.164786   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165105   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.165129   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165148   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165251   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:27.165424   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.165544   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:27.165571   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.165589   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165669   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:27.165746   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:27.165892   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.166028   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:27.166188   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:27.242824   12169 ssh_runner.go:195] Run: systemctl --version
	I0719 14:22:27.268669   12169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:22:27.426870   12169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:22:27.432789   12169 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:22:27.432871   12169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:22:27.448727   12169 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:22:27.448752   12169 start.go:495] detecting cgroup driver to use...
	I0719 14:22:27.448820   12169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:22:27.466498   12169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:22:27.480727   12169 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:22:27.480795   12169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:22:27.493929   12169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:22:27.507495   12169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:22:27.631040   12169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:22:27.793562   12169 docker.go:233] disabling docker service ...
	I0719 14:22:27.793617   12169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:22:27.807466   12169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:22:27.820058   12169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:22:27.943877   12169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:22:28.056561   12169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:22:28.071372   12169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:22:28.088856   12169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:22:28.088909   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.098419   12169 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:22:28.098462   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.108134   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.117555   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.127149   12169 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:22:28.136926   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.146333   12169 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.162397   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
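
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with at least the following settings; this is a reconstruction from the commands in the log, not a verbatim dump of the file:

	cat /etc/crio/crio.conf.d/02-crio.conf
	# ...
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
	# ...
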
	I0719 14:22:28.172300   12169 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:22:28.181209   12169 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:22:28.181256   12169 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:22:28.193769   12169 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
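
The netfilter sequence above amounts to: probe for the bridge-netfilter sysctl, load br_netfilter when it is missing, and enable IPv4 forwarding. Condensed into one sketch:

	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter    # makes the bridge sysctls appear under /proc/sys/net/bridge
	fi
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null    # pod and service traffic needs IP forwarding
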
	I0719 14:22:28.202445   12169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:22:28.313937   12169 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:22:28.445058   12169 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:22:28.445151   12169 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:22:28.449616   12169 start.go:563] Will wait 60s for crictl version
	I0719 14:22:28.449681   12169 ssh_runner.go:195] Run: which crictl
	I0719 14:22:28.453266   12169 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:22:28.494903   12169 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:22:28.495018   12169 ssh_runner.go:195] Run: crio --version
	I0719 14:22:28.522659   12169 ssh_runner.go:195] Run: crio --version
	I0719 14:22:28.555215   12169 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:22:28.556409   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:28.559152   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:28.559506   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:28.559531   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:28.559704   12169 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:22:28.563876   12169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:22:28.576553   12169 kubeadm.go:883] updating cluster {Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 14:22:28.576646   12169 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:22:28.576697   12169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:22:28.608090   12169 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 14:22:28.608153   12169 ssh_runner.go:195] Run: which lz4
	I0719 14:22:28.612183   12169 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 14:22:28.616199   12169 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 14:22:28.616225   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 14:22:29.921589   12169 crio.go:462] duration metric: took 1.309435123s to copy over tarball
	I0719 14:22:29.921652   12169 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 14:22:32.195571   12169 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273893497s)
	I0719 14:22:32.195599   12169 crio.go:469] duration metric: took 2.273983793s to extract the tarball
	I0719 14:22:32.195607   12169 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 14:22:32.239714   12169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:22:32.281809   12169 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:22:32.281836   12169 cache_images.go:84] Images are preloaded, skipping loading
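
The preload step reads as: list the images already on the node with crictl, and when the expected kube images are missing, copy the cached tarball over and unpack it into /var before re-checking. On the guest, once /preloaded.tar.lz4 is in place (the runner copies it with elevated permissions), the manual equivalent is roughly:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json    # the registry.k8s.io/kube-* v1.30.3 images should now be listed
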
	I0719 14:22:32.281846   12169 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.3 crio true true} ...
	I0719 14:22:32.281983   12169 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-018825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:22:32.282081   12169 ssh_runner.go:195] Run: crio config
	I0719 14:22:32.330345   12169 cni.go:84] Creating CNI manager for ""
	I0719 14:22:32.330366   12169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:22:32.330374   12169 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 14:22:32.330395   12169 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-018825 NodeName:addons-018825 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 14:22:32.330525   12169 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-018825"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 14:22:32.330578   12169 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:22:32.340981   12169 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 14:22:32.341054   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 14:22:32.350979   12169 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 14:22:32.367492   12169 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:22:32.383302   12169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 14:22:32.398795   12169 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0719 14:22:32.402669   12169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:22:32.414420   12169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:22:32.530591   12169 ssh_runner.go:195] Run: sudo systemctl start kubelet
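
By this point the kubelet unit and its 10-kubeadm.conf drop-in have been written, systemd reloaded, and the service started; the kubelet typically restarts on a loop until kubeadm init (below) provides its bootstrap configuration. Two quick checks on the guest (sketch):

	systemctl cat kubelet           # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet     # may flap between activating and active until the control plane is up
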
	I0719 14:22:32.546502   12169 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825 for IP: 192.168.39.100
	I0719 14:22:32.546530   12169 certs.go:194] generating shared ca certs ...
	I0719 14:22:32.546549   12169 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.546711   12169 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:22:32.662183   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt ...
	I0719 14:22:32.662214   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt: {Name:mk653b526ac38e1c5aaf4a69315f128eb630d254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.662416   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key ...
	I0719 14:22:32.662432   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key: {Name:mkfbbf0641db43c54a468a53e399a0eeead570f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.662515   12169 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:22:32.776929   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt ...
	I0719 14:22:32.776958   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt: {Name:mke9f53eb45f4a92a42e018c67b56e0843ac5842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.777118   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key ...
	I0719 14:22:32.777129   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key: {Name:mk435a7f64e6da5753d93a1289177a6967581df2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.777191   12169 certs.go:256] generating profile certs ...
	I0719 14:22:32.777240   12169 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.key
	I0719 14:22:32.777252   12169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt with IP's: []
	I0719 14:22:32.948611   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt ...
	I0719 14:22:32.948640   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: {Name:mk7b86b310f3139a7b89f9bc57d7c3ff3235d404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.948799   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.key ...
	I0719 14:22:32.948809   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.key: {Name:mk707486ecbbd323681cae7b1b167fb9317eaad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.948879   12169 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6
	I0719 14:22:32.948897   12169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100]
	I0719 14:22:33.095901   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6 ...
	I0719 14:22:33.095935   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6: {Name:mk2b86f4561e2ea5008488d825bc65cd1db25651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.096100   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6 ...
	I0719 14:22:33.096112   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6: {Name:mk4ffd02ebd6cf9d73ea940f7afe827800275b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.096174   12169 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt
	I0719 14:22:33.096240   12169 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key
	I0719 14:22:33.096283   12169 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key
	I0719 14:22:33.096299   12169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt with IP's: []
	I0719 14:22:33.144582   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt ...
	I0719 14:22:33.144609   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt: {Name:mk72c8b68e207a2c3fed34285c51d2c5714b3abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.144755   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key ...
	I0719 14:22:33.144764   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key: {Name:mkb139ba4f197ec9147cde88399a4eead3eb1739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.144914   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:22:33.144944   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:22:33.144967   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:22:33.144989   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:22:33.145492   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:22:33.170011   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:22:33.193786   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:22:33.217180   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:22:33.243689   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 14:22:33.269669   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 14:22:33.294872   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:22:33.319727   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 14:22:33.342025   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:22:33.364374   12169 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 14:22:33.380540   12169 ssh_runner.go:195] Run: openssl version
	I0719 14:22:33.386677   12169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:22:33.396863   12169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:22:33.401218   12169 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:22:33.401263   12169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:22:33.407260   12169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
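
The last few commands install the minikube CA into the node's trust store: the PEM is linked under /etc/ssl/certs both by name and by its OpenSSL subject hash, which is the name TLS clients actually look up. The hash-derived link can be reproduced like this (sketch, using the paths from the log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints the subject hash, b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0        # <hash>.0 is the lookup name OpenSSL expects
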
	I0719 14:22:33.418412   12169 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:22:33.422328   12169 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:22:33.422383   12169 kubeadm.go:392] StartCluster: {Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:22:33.422445   12169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 14:22:33.422493   12169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 14:22:33.458732   12169 cri.go:89] found id: ""
	I0719 14:22:33.458803   12169 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 14:22:33.469094   12169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 14:22:33.479138   12169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 14:22:33.488811   12169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 14:22:33.488827   12169 kubeadm.go:157] found existing configuration files:
	
	I0719 14:22:33.488868   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 14:22:33.498216   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 14:22:33.498267   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 14:22:33.507521   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 14:22:33.517258   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 14:22:33.517311   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 14:22:33.527026   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 14:22:33.537692   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 14:22:33.537752   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 14:22:33.549407   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 14:22:33.560268   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 14:22:33.560320   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 14:22:33.571360   12169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 14:22:33.759969   12169 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 14:22:43.594441   12169 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 14:22:43.594535   12169 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 14:22:43.594648   12169 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 14:22:43.594780   12169 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 14:22:43.594902   12169 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 14:22:43.594993   12169 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 14:22:43.596715   12169 out.go:204]   - Generating certificates and keys ...
	I0719 14:22:43.596836   12169 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 14:22:43.596920   12169 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 14:22:43.597021   12169 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 14:22:43.597081   12169 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 14:22:43.597152   12169 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 14:22:43.597204   12169 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 14:22:43.597250   12169 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 14:22:43.597349   12169 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-018825 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0719 14:22:43.597397   12169 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 14:22:43.597521   12169 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-018825 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0719 14:22:43.597619   12169 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 14:22:43.597714   12169 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 14:22:43.597777   12169 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 14:22:43.597887   12169 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 14:22:43.597956   12169 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 14:22:43.598032   12169 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 14:22:43.598115   12169 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 14:22:43.598211   12169 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 14:22:43.598288   12169 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 14:22:43.598372   12169 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 14:22:43.598461   12169 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 14:22:43.599694   12169 out.go:204]   - Booting up control plane ...
	I0719 14:22:43.599790   12169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 14:22:43.599910   12169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 14:22:43.599996   12169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 14:22:43.600103   12169 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 14:22:43.600173   12169 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 14:22:43.600206   12169 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 14:22:43.600343   12169 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 14:22:43.600445   12169 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 14:22:43.600500   12169 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.204727ms
	I0719 14:22:43.600561   12169 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 14:22:43.600621   12169 kubeadm.go:310] [api-check] The API server is healthy after 5.00122684s
	I0719 14:22:43.600713   12169 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 14:22:43.600819   12169 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 14:22:43.600872   12169 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 14:22:43.601052   12169 kubeadm.go:310] [mark-control-plane] Marking the node addons-018825 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 14:22:43.601103   12169 kubeadm.go:310] [bootstrap-token] Using token: eraloe.nxrwbbfvsota337c
	I0719 14:22:43.602405   12169 out.go:204]   - Configuring RBAC rules ...
	I0719 14:22:43.602489   12169 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 14:22:43.602566   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 14:22:43.602721   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 14:22:43.602890   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 14:22:43.602992   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 14:22:43.603064   12169 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 14:22:43.603168   12169 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 14:22:43.603215   12169 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 14:22:43.603262   12169 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 14:22:43.603268   12169 kubeadm.go:310] 
	I0719 14:22:43.603331   12169 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 14:22:43.603341   12169 kubeadm.go:310] 
	I0719 14:22:43.603434   12169 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 14:22:43.603444   12169 kubeadm.go:310] 
	I0719 14:22:43.603483   12169 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 14:22:43.603533   12169 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 14:22:43.603606   12169 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 14:22:43.603615   12169 kubeadm.go:310] 
	I0719 14:22:43.603675   12169 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 14:22:43.603682   12169 kubeadm.go:310] 
	I0719 14:22:43.603738   12169 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 14:22:43.603747   12169 kubeadm.go:310] 
	I0719 14:22:43.603811   12169 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 14:22:43.603899   12169 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 14:22:43.603978   12169 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 14:22:43.603985   12169 kubeadm.go:310] 
	I0719 14:22:43.604087   12169 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 14:22:43.604196   12169 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 14:22:43.604214   12169 kubeadm.go:310] 
	I0719 14:22:43.604343   12169 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eraloe.nxrwbbfvsota337c \
	I0719 14:22:43.604482   12169 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 14:22:43.604511   12169 kubeadm.go:310] 	--control-plane 
	I0719 14:22:43.604517   12169 kubeadm.go:310] 
	I0719 14:22:43.604600   12169 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 14:22:43.604606   12169 kubeadm.go:310] 
	I0719 14:22:43.604672   12169 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eraloe.nxrwbbfvsota337c \
	I0719 14:22:43.604767   12169 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 14:22:43.604779   12169 cni.go:84] Creating CNI manager for ""
	I0719 14:22:43.604788   12169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:22:43.606319   12169 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 14:22:43.607351   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 14:22:43.618848   12169 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 14:22:43.637485   12169 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 14:22:43.637587   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:43.637592   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-018825 minikube.k8s.io/updated_at=2024_07_19T14_22_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=addons-018825 minikube.k8s.io/primary=true
	I0719 14:22:43.666008   12169 ops.go:34] apiserver oom_adj: -16
	I0719 14:22:43.755568   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:44.256041   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:44.756047   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:45.255900   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:45.756223   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:46.255574   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:46.756020   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:47.256563   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:47.755712   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:48.256408   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:48.755625   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:49.256184   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:49.755626   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:50.255898   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:50.756456   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:51.256129   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:51.756230   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:52.255832   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:52.755705   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:53.256233   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:53.755564   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:54.256459   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:54.755707   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:55.255816   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:55.756453   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:56.256337   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:56.755764   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:56.861922   12169 kubeadm.go:1113] duration metric: took 13.224402674s to wait for elevateKubeSystemPrivileges
	I0719 14:22:56.861980   12169 kubeadm.go:394] duration metric: took 23.439599918s to StartCluster
	I0719 14:22:56.862008   12169 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:56.862149   12169 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:22:56.862640   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:56.862879   12169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 14:22:56.862891   12169 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 14:22:56.862975   12169 addons.go:69] Setting yakd=true in profile "addons-018825"
	I0719 14:22:56.862869   12169 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:22:56.863049   12169 addons.go:69] Setting registry=true in profile "addons-018825"
	I0719 14:22:56.863168   12169 addons.go:234] Setting addon registry=true in "addons-018825"
	I0719 14:22:56.863206   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863051   12169 addons.go:69] Setting ingress-dns=true in profile "addons-018825"
	I0719 14:22:56.863299   12169 addons.go:234] Setting addon ingress-dns=true in "addons-018825"
	I0719 14:22:56.863353   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863019   12169 addons.go:69] Setting inspektor-gadget=true in profile "addons-018825"
	I0719 14:22:56.863494   12169 addons.go:234] Setting addon inspektor-gadget=true in "addons-018825"
	I0719 14:22:56.863536   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863030   12169 addons.go:69] Setting metrics-server=true in profile "addons-018825"
	I0719 14:22:56.863595   12169 addons.go:234] Setting addon metrics-server=true in "addons-018825"
	I0719 14:22:56.863640   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863031   12169 addons.go:69] Setting helm-tiller=true in profile "addons-018825"
	I0719 14:22:56.863681   12169 addons.go:234] Setting addon helm-tiller=true in "addons-018825"
	I0719 14:22:56.863701   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.863720   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863041   12169 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-018825"
	I0719 14:22:56.863752   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.863775   12169 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-018825"
	I0719 14:22:56.863805   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.863042   12169 addons.go:69] Setting ingress=true in profile "addons-018825"
	I0719 14:22:56.863849   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.863865   12169 addons.go:234] Setting addon ingress=true in "addons-018825"
	I0719 14:22:56.863897   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863929   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.863979   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.863052   12169 addons.go:69] Setting gcp-auth=true in profile "addons-018825"
	I0719 14:22:56.864074   12169 mustload.go:65] Loading cluster: addons-018825
	I0719 14:22:56.863062   12169 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-018825"
	I0719 14:22:56.864092   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864106   12169 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-018825"
	I0719 14:22:56.863064   12169 addons.go:69] Setting default-storageclass=true in profile "addons-018825"
	I0719 14:22:56.864126   12169 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-018825"
	I0719 14:22:56.863072   12169 addons.go:69] Setting volcano=true in profile "addons-018825"
	I0719 14:22:56.864133   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864155   12169 addons.go:234] Setting addon volcano=true in "addons-018825"
	I0719 14:22:56.863077   12169 addons.go:69] Setting volumesnapshots=true in profile "addons-018825"
	I0719 14:22:56.864174   12169 addons.go:234] Setting addon volumesnapshots=true in "addons-018825"
	I0719 14:22:56.863078   12169 addons.go:69] Setting cloud-spanner=true in profile "addons-018825"
	I0719 14:22:56.864195   12169 addons.go:234] Setting addon cloud-spanner=true in "addons-018825"
	I0719 14:22:56.863021   12169 addons.go:234] Setting addon yakd=true in "addons-018825"
	I0719 14:22:56.863075   12169 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-018825"
	I0719 14:22:56.864239   12169 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-018825"
	I0719 14:22:56.863092   12169 config.go:182] Loaded profile config "addons-018825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:22:56.863089   12169 addons.go:69] Setting storage-provisioner=true in profile "addons-018825"
	I0719 14:22:56.864265   12169 addons.go:234] Setting addon storage-provisioner=true in "addons-018825"
	I0719 14:22:56.864277   12169 config.go:182] Loaded profile config "addons-018825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:22:56.864517   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864587   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864596   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864606   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864625   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864690   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864846   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864874   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864872   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864903   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864519   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864982   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864692   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865065   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865319   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865343   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865353   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.865410   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.865524   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865600   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865661   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865685   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865562   12169 out.go:177] * Verifying Kubernetes components...
	I0719 14:22:56.865757   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865959   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.866318   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.867361   12169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:22:56.884345   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34869
	I0719 14:22:56.884879   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.885266   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0719 14:22:56.885327   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0719 14:22:56.885551   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.885589   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.885861   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.886405   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.886425   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.886494   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.886849   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.886909   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.887286   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.887420   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.887434   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.887500   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.887524   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.887885   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.888365   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.888390   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.888615   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0719 14:22:56.888986   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.889459   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.889476   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.889911   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.890459   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.890487   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.895482   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0719 14:22:56.895499   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0719 14:22:56.898759   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.898802   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.898846   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.899457   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.899486   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.899890   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.900624   12169 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-018825"
	I0719 14:22:56.900674   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.900637   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.900747   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.901023   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.901072   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.901822   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.901860   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.912801   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.913113   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.913146   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.913490   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.913514   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.913897   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.914073   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.916225   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.916617   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.916660   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.935749   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0719 14:22:56.938230   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0719 14:22:56.938372   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0719 14:22:56.938449   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I0719 14:22:56.939040   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.939248   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.939351   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.939621   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.939634   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.939754   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.939767   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.939872   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.939880   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.939934   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.940308   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.940446   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.940457   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.940511   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.940947   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.940975   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.941174   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.941235   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0719 14:22:56.941381   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.941404   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.941573   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.942269   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.942451   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.942682   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.942705   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.943036   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.943286   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.943482   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.943966   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.944731   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.944802   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0719 14:22:56.945291   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.945893   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.946056   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 14:22:56.946084   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.946096   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.946255   12169 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 14:22:56.946450   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.947372   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.947436   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.947485   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 14:22:56.947502   12169 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 14:22:56.947521   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.947634   12169 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0719 14:22:56.947705   12169 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 14:22:56.948863   12169 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0719 14:22:56.948880   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0719 14:22:56.948897   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.948973   12169 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 14:22:56.949114   12169 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 14:22:56.949124   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 14:22:56.949138   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.950420   12169 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 14:22:56.950434   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 14:22:56.950449   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.950736   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0719 14:22:56.951539   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.952333   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.952349   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.952598   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.952682   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.953031   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.953064   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.953241   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.953243   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.953450   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.953766   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.953942   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.956646   12169 addons.go:234] Setting addon default-storageclass=true in "addons-018825"
	I0719 14:22:56.956690   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.957044   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.957063   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.957251   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.957425   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.958370   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.958707   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.958726   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.959103   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.959119   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.959363   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.959553   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.959607   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.959751   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.959805   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.959818   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.959912   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.959942   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.959968   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.960114   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.960152   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.960241   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.960280   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.960440   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.965008   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0719 14:22:56.965405   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.965995   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.966014   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.966401   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.966661   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.968344   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.969783   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0719 14:22:56.970137   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.970352   12169 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 14:22:56.970805   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.970824   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.971738   12169 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 14:22:56.971757   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 14:22:56.971774   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.971923   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.972110   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.974026   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0719 14:22:56.974508   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0719 14:22:56.974655   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.975270   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.975351   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.975572   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.975839   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.975858   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.975872   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:22:56.975880   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:22:56.976121   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:22:56.976145   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:22:56.976154   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.976158   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:22:56.976168   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:22:56.976176   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:22:56.976215   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.976229   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.976325   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.976416   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0719 14:22:56.976490   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.976628   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.977130   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.977175   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.977190   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.977711   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.977735   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.977963   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:22:56.977984   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:22:56.978005   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	W0719 14:22:56.978080   12169 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0719 14:22:56.978381   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.978911   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.978938   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.978964   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 14:22:56.979481   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.979608   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.980052   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.980068   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.980239   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.980262   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.980549   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.980597   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.981095   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.981139   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.981330   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38583
	I0719 14:22:56.981813   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.981839   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.982487   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.982760   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0719 14:22:56.983056   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.983073   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.983135   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.983203   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0719 14:22:56.984245   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.984269   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.984280   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.984497   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.984576   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.984801   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.984842   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.985110   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.985148   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.986705   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.986732   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.990714   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.991339   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.991385   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.993109   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0719 14:22:56.995644   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.996106   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.996124   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.996548   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.996794   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.998149   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0719 14:22:56.998709   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.999126   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.999146   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.999386   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.999766   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.999798   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:57.004765   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0719 14:22:57.005205   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.005745   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.005767   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.005823   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0719 14:22:57.006169   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.006658   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.006681   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.006800   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.007057   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.007180   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.007230   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.009413   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.009480   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.011403   12169 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 14:22:57.011456   12169 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 14:22:57.012357   12169 out.go:177]   - Using image docker.io/busybox:stable
	I0719 14:22:57.012538   12169 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 14:22:57.012550   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 14:22:57.012569   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.014012   12169 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 14:22:57.014028   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 14:22:57.014043   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.016496   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I0719 14:22:57.016912   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.016991   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.017546   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.017563   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.018207   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.018399   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.018417   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.018555   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.019244   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.019305   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0719 14:22:57.019429   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.019594   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.019733   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.019790   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.019808   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.019931   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.020174   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.020328   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.020477   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.020590   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.021351   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0719 14:22:57.021950   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.022366   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.025247   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0719 14:22:57.025826   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.025912   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.025915   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.025931   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.026345   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.026360   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.026401   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.026829   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:57.026855   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:57.027086   12169 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 14:22:57.027295   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0719 14:22:57.027089   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.027423   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.027411   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.027744   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.027826   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.027886   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.028064   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.028459   12169 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:22:57.028476   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 14:22:57.028492   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.029189   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.029207   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.029189   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0719 14:22:57.029628   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.029887   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.030098   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.030115   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.030167   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.030969   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.031139   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.032270   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.033014   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.033052   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.033500   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.033912   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0719 14:22:57.033920   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.033949   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.034128   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.034174   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.034396   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.034461   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.034623   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.034746   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.034749   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.034767   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.034890   12169 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 14:22:57.034955   12169 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 14:22:57.035048   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.035568   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.035751   12169 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 14:22:57.035753   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 14:22:57.036533   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 14:22:57.036551   12169 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 14:22:57.036569   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.036626   12169 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 14:22:57.036636   12169 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 14:22:57.036648   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.037057   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.037683   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 14:22:57.037701   12169 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 14:22:57.037719   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.038558   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 14:22:57.039747   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 14:22:57.040468   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.040498   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.040894   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 14:22:57.041039   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.041060   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.041145   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.041163   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.041747   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.041784   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.041943   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.041998   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.042107   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.042150   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 14:22:57.042287   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.042326   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.042336   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.042346   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.042499   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.042585   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.042890   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.043060   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.043255   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.043496   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.044737   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 14:22:57.044803   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 14:22:57.045867   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 14:22:57.046028   12169 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 14:22:57.046042   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 14:22:57.046056   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.048006   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 14:22:57.049056   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.049076   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 14:22:57.049412   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.049430   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.049602   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.049758   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.049870   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.050099   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.050995   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 14:22:57.052192   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 14:22:57.052209   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 14:22:57.052226   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.052286   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0719 14:22:57.052636   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.053253   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.053278   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.053576   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.053815   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.055614   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.055630   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.055840   12169 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 14:22:57.055855   12169 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 14:22:57.055870   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.055981   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.056036   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.056128   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.056294   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.056455   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	W0719 14:22:57.056608   12169 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53052->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.056633   12169 retry.go:31] will retry after 351.6847ms: ssh: handshake failed: read tcp 192.168.39.1:53052->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.056667   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.058375   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.058744   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.058776   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.058879   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.059039   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.059152   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.059259   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	W0719 14:22:57.091213   12169 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53062->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.091251   12169 retry.go:31] will retry after 357.490865ms: ssh: handshake failed: read tcp 192.168.39.1:53062->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.374796   12169 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:22:57.374821   12169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 14:22:57.430996   12169 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 14:22:57.431024   12169 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 14:22:57.522222   12169 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0719 14:22:57.522264   12169 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0719 14:22:57.527111   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 14:22:57.527129   12169 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 14:22:57.533144   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 14:22:57.559975   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 14:22:57.562578   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 14:22:57.565842   12169 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 14:22:57.565866   12169 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 14:22:57.604073   12169 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 14:22:57.604098   12169 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 14:22:57.640449   12169 node_ready.go:35] waiting up to 6m0s for node "addons-018825" to be "Ready" ...
	I0719 14:22:57.643361   12169 node_ready.go:49] node "addons-018825" has status "Ready":"True"
	I0719 14:22:57.643379   12169 node_ready.go:38] duration metric: took 2.907219ms for node "addons-018825" to be "Ready" ...
	I0719 14:22:57.643386   12169 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:22:57.658390   12169 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace to be "Ready" ...
	I0719 14:22:57.704488   12169 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 14:22:57.704518   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 14:22:57.706206   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 14:22:57.706223   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 14:22:57.708594   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 14:22:57.711566   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:22:57.736581   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 14:22:57.736613   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 14:22:57.760172   12169 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 14:22:57.760197   12169 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0719 14:22:57.787931   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 14:22:57.787959   12169 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 14:22:57.896690   12169 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 14:22:57.896716   12169 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 14:22:57.903872   12169 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 14:22:57.903901   12169 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 14:22:57.915405   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 14:22:57.915430   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 14:22:57.988334   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 14:22:58.039939   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 14:22:58.087755   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 14:22:58.087787   12169 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 14:22:58.087754   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 14:22:58.087829   12169 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 14:22:58.106668   12169 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 14:22:58.106694   12169 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 14:22:58.110836   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 14:22:58.126079   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 14:22:58.134336   12169 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 14:22:58.134359   12169 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 14:22:58.171247   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 14:22:58.171272   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 14:22:58.249777   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 14:22:58.249800   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 14:22:58.275914   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 14:22:58.275936   12169 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 14:22:58.282467   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 14:22:58.282501   12169 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 14:22:58.356829   12169 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 14:22:58.356861   12169 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 14:22:58.395076   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 14:22:58.395103   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 14:22:58.444496   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 14:22:58.453193   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 14:22:58.512057   12169 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 14:22:58.512090   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 14:22:58.621826   12169 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 14:22:58.621853   12169 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 14:22:58.738865   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 14:22:58.738893   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 14:22:58.805923   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 14:22:58.940276   12169 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 14:22:58.940301   12169 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 14:22:59.098411   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 14:22:59.098435   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 14:22:59.290689   12169 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 14:22:59.290710   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 14:22:59.494508   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 14:22:59.494537   12169 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 14:22:59.664638   12169 pod_ready.go:102] pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace has status "Ready":"False"
	I0719 14:22:59.689805   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 14:22:59.792431   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 14:22:59.792463   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 14:22:59.869427   12169 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.494575379s)
	I0719 14:22:59.869463   12169 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0719 14:23:00.309325   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 14:23:00.309353   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 14:23:00.384675   12169 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-018825" context rescaled to 1 replicas
	I0719 14:23:00.487227   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.954050452s)
	I0719 14:23:00.487294   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487306   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487302   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.927287998s)
	I0719 14:23:00.487347   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487362   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487382   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.924775843s)
	I0719 14:23:00.487417   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487430   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487721   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.487738   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.487747   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487756   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487803   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.487825   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.487833   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487841   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487810   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.487774   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.487868   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.487893   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487923   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487777   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.488062   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.488085   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.488147   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.488197   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.488215   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.488257   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.489885   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.489902   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.645104   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 14:23:00.645135   12169 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 14:23:00.980284   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 14:23:01.804104   12169 pod_ready.go:92] pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:01.804130   12169 pod_ready.go:81] duration metric: took 4.145709079s for pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:01.804177   12169 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t6d29" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:01.948784   12169 pod_ready.go:92] pod "coredns-7db6d8ff4d-t6d29" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:01.948810   12169 pod_ready.go:81] duration metric: took 144.623131ms for pod "coredns-7db6d8ff4d-t6d29" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:01.948822   12169 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.104324   12169 pod_ready.go:92] pod "etcd-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.104347   12169 pod_ready.go:81] duration metric: took 155.517694ms for pod "etcd-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.104355   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.236324   12169 pod_ready.go:92] pod "kube-apiserver-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.236348   12169 pod_ready.go:81] duration metric: took 131.984509ms for pod "kube-apiserver-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.236359   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.305861   12169 pod_ready.go:92] pod "kube-controller-manager-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.305880   12169 pod_ready.go:81] duration metric: took 69.514726ms for pod "kube-controller-manager-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.305891   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qkf6b" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.424938   12169 pod_ready.go:92] pod "kube-proxy-qkf6b" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.424959   12169 pod_ready.go:81] duration metric: took 119.061404ms for pod "kube-proxy-qkf6b" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.424969   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.542930   12169 pod_ready.go:92] pod "kube-scheduler-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.542954   12169 pod_ready.go:81] duration metric: took 117.97896ms for pod "kube-scheduler-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.542963   12169 pod_ready.go:38] duration metric: took 4.899567394s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:23:02.542976   12169 api_server.go:52] waiting for apiserver process to appear ...
	I0719 14:23:02.543026   12169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:23:02.879319   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.170691731s)
	I0719 14:23:02.879379   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879393   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879423   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.167812591s)
	I0719 14:23:02.879464   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.891088657s)
	I0719 14:23:02.879479   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879488   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.839512367s)
	I0719 14:23:02.879517   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879535   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879494   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879495   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879590   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879724   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879773   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879781   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.879789   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879795   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879855   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879864   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879875   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.879878   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879899   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.879906   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879913   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879950   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879977   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879977   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879992   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880001   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880009   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880018   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.880027   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879884   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.880086   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.880145   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.880169   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880175   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880285   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.880312   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880319   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880327   12169 addons.go:475] Verifying addon registry=true in "addons-018825"
	I0719 14:23:02.880340   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.880374   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880381   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.883210   12169 out.go:177] * Verifying registry addon...
	I0719 14:23:02.885686   12169 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 14:23:02.912166   12169 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 14:23:02.912193   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:02.984866   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.984885   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.985178   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.985201   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.985228   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:03.407827   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:03.891248   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:04.007417   12169 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 14:23:04.007453   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:23:04.010439   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.010803   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:23:04.010832   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.011001   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:23:04.011212   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:23:04.011394   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:23:04.011519   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:23:04.274446   12169 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 14:23:04.391335   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:04.532731   12169 addons.go:234] Setting addon gcp-auth=true in "addons-018825"
	I0719 14:23:04.532781   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:23:04.533078   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:23:04.533103   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:23:04.547767   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I0719 14:23:04.548234   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:23:04.548748   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:23:04.548773   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:23:04.549090   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:23:04.549691   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:23:04.549722   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:23:04.564909   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0719 14:23:04.565446   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:23:04.566002   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:23:04.566032   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:23:04.566452   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:23:04.566648   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:23:04.568337   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:23:04.568614   12169 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 14:23:04.568647   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:23:04.571701   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.572188   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:23:04.572216   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.572415   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:23:04.572613   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:23:04.572748   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:23:04.572890   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:23:04.895504   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:05.395962   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:05.913261   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:06.127498   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.01662294s)
	I0719 14:23:06.127545   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127557   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127506   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.001392587s)
	I0719 14:23:06.127593   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127607   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.68306074s)
	I0719 14:23:06.127620   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127649   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127660   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127692   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.674451805s)
	I0719 14:23:06.127735   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127755   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127785   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.321825889s)
	W0719 14:23:06.127812   12169 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 14:23:06.127844   12169 retry.go:31] will retry after 338.028309ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
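	[editor's note] The failure above is the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRDs that define it, so the API server has not yet registered the snapshot.storage.k8s.io/v1 types when the class is submitted. minikube handles this by retrying (see the retry at 14:23:06 and the forced re-apply at 14:23:08, which completes successfully). A minimal sketch of the same idea done explicitly, assuming direct kubectl access to the cluster and using the manifest paths from the log (the 60s timeout is illustrative):

		# apply the CRDs first, then wait for the API server to mark them Established
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# only now is it safe to create objects of the new kind
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

	This is not how the addon manager is implemented; it simply re-applies until the mapping resolves, which is what the log shows.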
	I0719 14:23:06.128103   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128110   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128113   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128131   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128135   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128138   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128145   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128149   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128140   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128162   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128168   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128175   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128174   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.438334905s)
	I0719 14:23:06.128183   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128194   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128204   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128169   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128340   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128373   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128395   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128403   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128402   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128438   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128445   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128558   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128581   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128588   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128594   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128601   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128784   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128803   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128809   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128983   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.129004   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.129013   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.129013   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.129021   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.129022   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.129031   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.129032   12169 addons.go:475] Verifying addon metrics-server=true in "addons-018825"
	I0719 14:23:06.129917   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.129964   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.129982   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.129999   12169 addons.go:475] Verifying addon ingress=true in "addons-018825"
	I0719 14:23:06.131845   12169 out.go:177] * Verifying ingress addon...
	I0719 14:23:06.131859   12169 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-018825 service yakd-dashboard -n yakd-dashboard
	
	I0719 14:23:06.134713   12169 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 14:23:06.147490   12169 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 14:23:06.147510   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:06.185911   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.185932   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.186291   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.186310   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.186333   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.390185   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:06.466076   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 14:23:06.644688   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:06.923572   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:07.160370   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:07.163148   12169 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.620100174s)
	I0719 14:23:07.163189   12169 api_server.go:72] duration metric: took 10.300180868s to wait for apiserver process to appear ...
	I0719 14:23:07.163196   12169 api_server.go:88] waiting for apiserver healthz status ...
	I0719 14:23:07.163195   12169 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.594552094s)
	I0719 14:23:07.163214   12169 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0719 14:23:07.163853   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.183524063s)
	I0719 14:23:07.163892   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:07.163913   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:07.164179   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:07.164195   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:07.164207   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:07.164222   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:07.164233   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:07.164527   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:07.164547   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:07.164558   12169 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-018825"
	I0719 14:23:07.164769   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 14:23:07.165922   12169 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 14:23:07.167636   12169 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 14:23:07.168299   12169 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 14:23:07.169015   12169 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 14:23:07.169033   12169 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 14:23:07.174872   12169 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0719 14:23:07.176820   12169 api_server.go:141] control plane version: v1.30.3
	I0719 14:23:07.176843   12169 api_server.go:131] duration metric: took 13.640213ms to wait for apiserver health ...
	I0719 14:23:07.176852   12169 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 14:23:07.206861   12169 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 14:23:07.206884   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:07.221671   12169 system_pods.go:59] 19 kube-system pods found
	I0719 14:23:07.221699   12169 system_pods.go:61] "coredns-7db6d8ff4d-88nlf" [6469d6b1-e474-4454-8359-e084930e879c] Running
	I0719 14:23:07.221703   12169 system_pods.go:61] "coredns-7db6d8ff4d-t6d29" [388f181c-2c70-4115-b39c-a0cc5d9548aa] Running
	I0719 14:23:07.221709   12169 system_pods.go:61] "csi-hostpath-attacher-0" [324e961d-ccdf-4cac-9736-a5a22192761c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 14:23:07.221713   12169 system_pods.go:61] "csi-hostpath-resizer-0" [c715f347-d341-4f6d-a2e8-ad1d7984ea15] Pending
	I0719 14:23:07.221723   12169 system_pods.go:61] "csi-hostpathplugin-4xs8c" [7ba367f1-c7ae-4bf5-bc2e-3bbd75010f18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 14:23:07.221727   12169 system_pods.go:61] "etcd-addons-018825" [2b837606-a3c7-4683-b31d-b43122758097] Running
	I0719 14:23:07.221730   12169 system_pods.go:61] "kube-apiserver-addons-018825" [b6fcbfe0-a44a-42bb-a757-ee784dd55ab9] Running
	I0719 14:23:07.221733   12169 system_pods.go:61] "kube-controller-manager-addons-018825" [68b911e6-14f5-4e65-b9a0-4db60638da8c] Running
	I0719 14:23:07.221738   12169 system_pods.go:61] "kube-ingress-dns-minikube" [543d1957-29b4-4f11-a3ef-a50baed9131f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0719 14:23:07.221741   12169 system_pods.go:61] "kube-proxy-qkf6b" [fd641a61-241c-4387-86e8-432a465cb34d] Running
	I0719 14:23:07.221744   12169 system_pods.go:61] "kube-scheduler-addons-018825" [2a0ca51b-e5b5-45f2-bbcb-b8d1ec175fd2] Running
	I0719 14:23:07.221748   12169 system_pods.go:61] "metrics-server-c59844bb4-p76dw" [4f3616b2-3dcb-414f-930a-494df347f25f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 14:23:07.221754   12169 system_pods.go:61] "nvidia-device-plugin-daemonset-6bcnd" [ec6c8a36-43a7-42bd-bb5d-9840f023356c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 14:23:07.221761   12169 system_pods.go:61] "registry-656c9c8d9c-k884k" [f109574c-299a-469d-94a4-ad81e51b9efa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 14:23:07.221766   12169 system_pods.go:61] "registry-proxy-jq9hm" [90bf1ad6-3f9b-465b-aaa2-0d77bd8970a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 14:23:07.221770   12169 system_pods.go:61] "snapshot-controller-745499f584-9xmxh" [ae2b17e9-c4ba-43cc-8c77-8e6e7e3482d9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.221782   12169 system_pods.go:61] "snapshot-controller-745499f584-wvpct" [a7f7dd53-317c-497c-89aa-2440d0bd45bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.221785   12169 system_pods.go:61] "storage-provisioner" [7aaff945-7762-4f72-9ca2-8d34dd65bf35] Running
	I0719 14:23:07.221789   12169 system_pods.go:61] "tiller-deploy-6677d64bcd-c8ct4" [f5d05cf3-2614-4ccf-9d6f-5afb52d9c031] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 14:23:07.221796   12169 system_pods.go:74] duration metric: took 44.937964ms to wait for pod list to return data ...
	I0719 14:23:07.221806   12169 default_sa.go:34] waiting for default service account to be created ...
	I0719 14:23:07.253917   12169 default_sa.go:45] found service account: "default"
	I0719 14:23:07.253942   12169 default_sa.go:55] duration metric: took 32.130367ms for default service account to be created ...
	I0719 14:23:07.253951   12169 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 14:23:07.282822   12169 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 14:23:07.282846   12169 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 14:23:07.285919   12169 system_pods.go:86] 19 kube-system pods found
	I0719 14:23:07.285951   12169 system_pods.go:89] "coredns-7db6d8ff4d-88nlf" [6469d6b1-e474-4454-8359-e084930e879c] Running
	I0719 14:23:07.285960   12169 system_pods.go:89] "coredns-7db6d8ff4d-t6d29" [388f181c-2c70-4115-b39c-a0cc5d9548aa] Running
	I0719 14:23:07.285971   12169 system_pods.go:89] "csi-hostpath-attacher-0" [324e961d-ccdf-4cac-9736-a5a22192761c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 14:23:07.285979   12169 system_pods.go:89] "csi-hostpath-resizer-0" [c715f347-d341-4f6d-a2e8-ad1d7984ea15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 14:23:07.285987   12169 system_pods.go:89] "csi-hostpathplugin-4xs8c" [7ba367f1-c7ae-4bf5-bc2e-3bbd75010f18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 14:23:07.286001   12169 system_pods.go:89] "etcd-addons-018825" [2b837606-a3c7-4683-b31d-b43122758097] Running
	I0719 14:23:07.286013   12169 system_pods.go:89] "kube-apiserver-addons-018825" [b6fcbfe0-a44a-42bb-a757-ee784dd55ab9] Running
	I0719 14:23:07.286021   12169 system_pods.go:89] "kube-controller-manager-addons-018825" [68b911e6-14f5-4e65-b9a0-4db60638da8c] Running
	I0719 14:23:07.286050   12169 system_pods.go:89] "kube-ingress-dns-minikube" [543d1957-29b4-4f11-a3ef-a50baed9131f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0719 14:23:07.286060   12169 system_pods.go:89] "kube-proxy-qkf6b" [fd641a61-241c-4387-86e8-432a465cb34d] Running
	I0719 14:23:07.286064   12169 system_pods.go:89] "kube-scheduler-addons-018825" [2a0ca51b-e5b5-45f2-bbcb-b8d1ec175fd2] Running
	I0719 14:23:07.286069   12169 system_pods.go:89] "metrics-server-c59844bb4-p76dw" [4f3616b2-3dcb-414f-930a-494df347f25f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 14:23:07.286076   12169 system_pods.go:89] "nvidia-device-plugin-daemonset-6bcnd" [ec6c8a36-43a7-42bd-bb5d-9840f023356c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 14:23:07.286088   12169 system_pods.go:89] "registry-656c9c8d9c-k884k" [f109574c-299a-469d-94a4-ad81e51b9efa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 14:23:07.286094   12169 system_pods.go:89] "registry-proxy-jq9hm" [90bf1ad6-3f9b-465b-aaa2-0d77bd8970a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 14:23:07.286102   12169 system_pods.go:89] "snapshot-controller-745499f584-9xmxh" [ae2b17e9-c4ba-43cc-8c77-8e6e7e3482d9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.286109   12169 system_pods.go:89] "snapshot-controller-745499f584-wvpct" [a7f7dd53-317c-497c-89aa-2440d0bd45bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.286113   12169 system_pods.go:89] "storage-provisioner" [7aaff945-7762-4f72-9ca2-8d34dd65bf35] Running
	I0719 14:23:07.286119   12169 system_pods.go:89] "tiller-deploy-6677d64bcd-c8ct4" [f5d05cf3-2614-4ccf-9d6f-5afb52d9c031] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 14:23:07.286128   12169 system_pods.go:126] duration metric: took 32.171835ms to wait for k8s-apps to be running ...
	I0719 14:23:07.286135   12169 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 14:23:07.286177   12169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:23:07.362900   12169 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 14:23:07.362922   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 14:23:07.391761   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:07.441153   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 14:23:07.639752   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:07.686052   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:07.898628   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:08.139791   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:08.174892   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:08.393795   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:08.497937   12169 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.211737053s)
	I0719 14:23:08.497989   12169 system_svc.go:56] duration metric: took 1.211834363s WaitForService to wait for kubelet
	I0719 14:23:08.498000   12169 kubeadm.go:582] duration metric: took 11.63499022s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:23:08.498022   12169 node_conditions.go:102] verifying NodePressure condition ...
	I0719 14:23:08.498674   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.032532328s)
	I0719 14:23:08.498734   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:08.498755   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:08.499070   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:08.499134   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:08.499143   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:08.499159   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:08.499167   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:08.499375   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:08.499451   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:08.499462   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:08.504146   12169 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:23:08.504168   12169 node_conditions.go:123] node cpu capacity is 2
	I0719 14:23:08.504180   12169 node_conditions.go:105] duration metric: took 6.152829ms to run NodePressure ...
	I0719 14:23:08.504194   12169 start.go:241] waiting for startup goroutines ...
	I0719 14:23:08.638663   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:08.675873   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:08.959505   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:09.067561   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.626361493s)
	I0719 14:23:09.067624   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:09.067642   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:09.067955   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:09.068014   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:09.068025   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:09.068038   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:09.068046   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:09.068274   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:09.068320   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:09.070217   12169 addons.go:475] Verifying addon gcp-auth=true in "addons-018825"
	I0719 14:23:09.071882   12169 out.go:177] * Verifying gcp-auth addon...
	I0719 14:23:09.074074   12169 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 14:23:09.089191   12169 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 14:23:09.089222   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:09.148173   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:09.183692   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:09.390922   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:09.578499   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:09.640142   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:09.674945   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:09.890550   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:10.080360   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:10.140746   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:10.175692   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:10.390496   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:10.578943   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:10.639790   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:10.674734   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:10.891343   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:11.077235   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:11.140280   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:11.174816   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:11.391452   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:11.577866   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:11.639822   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:11.676853   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:11.891725   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:12.077705   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:12.139117   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:12.174390   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:12.390279   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:12.579319   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:12.639857   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:12.674067   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:13.016315   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:13.080513   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:13.139691   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:13.190397   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:13.390860   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:13.578055   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:13.640689   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:13.673991   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:13.890457   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:14.077916   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:14.139575   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:14.178283   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:14.391033   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:14.577298   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:14.639167   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:14.673530   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:14.890099   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:15.078050   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:15.139817   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:15.176544   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:15.389661   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:15.578152   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:15.640765   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:15.674076   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:15.891501   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:16.077518   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:16.138991   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:16.173551   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:16.389786   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:16.577883   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:16.639814   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:16.674356   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:16.891146   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:17.078498   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:17.139496   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:17.174372   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:17.390793   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:17.578482   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:17.639643   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:17.677576   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:17.890172   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:18.078906   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:18.139874   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:18.174417   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:18.392732   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:18.579446   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:18.639826   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:18.673242   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:18.890433   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:19.076952   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:19.140064   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:19.176726   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:19.390535   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:19.577336   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:19.641603   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:19.674621   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:19.891647   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:20.077440   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:20.138933   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:20.174538   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:20.390041   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:20.578578   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:20.640970   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:20.677311   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:20.891038   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:21.080158   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:21.140940   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:21.181419   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:21.389738   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:21.577442   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:21.639174   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:21.674456   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:22.219799   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:22.223563   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:22.224377   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:22.233671   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:22.393021   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:22.577811   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:22.639329   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:22.673564   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:22.890215   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:23.077828   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:23.140932   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:23.174169   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:23.393007   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:23.580062   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:23.640594   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:23.674794   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:23.891243   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:24.079186   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:24.142738   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:24.174851   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:24.393109   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:24.578330   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:24.638726   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:24.680561   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:24.890430   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:25.078589   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:25.139672   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:25.174175   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:25.391245   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:25.578058   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:25.639491   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:25.673629   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:25.890802   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:26.078315   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:26.141185   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:26.175425   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:26.393239   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:26.578148   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:26.638706   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:26.673773   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:26.890921   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:27.078192   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:27.139401   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:27.174151   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:27.390368   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:27.577334   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:27.639371   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:27.674403   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:27.890993   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:28.077722   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:28.139565   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:28.175026   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:28.390304   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:28.577850   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:28.640095   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:28.672887   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:28.890036   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:29.078027   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:29.139836   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:29.173732   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:29.391742   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:29.578351   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:29.639083   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:29.674203   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:29.890644   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:30.077879   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:30.139546   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:30.174792   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:30.390248   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:30.578798   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:30.640079   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:30.674039   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:30.890980   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:31.078380   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:31.139997   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:31.172703   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:31.391041   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:31.578567   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:31.643445   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:31.674107   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:31.892495   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:32.077533   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:32.139462   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:32.174095   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:32.390808   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:32.577905   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:32.639599   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:32.674109   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:32.890466   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:33.077984   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:33.139703   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:33.179896   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:33.391628   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:33.966141   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:33.966391   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:33.967311   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:33.969556   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:34.077338   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:34.138763   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:34.174418   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:34.392507   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:34.578439   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:34.639480   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:34.674201   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:34.891012   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:35.078038   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:35.138916   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:35.175055   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:35.390628   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:35.577896   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:35.639415   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:35.673627   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:35.892851   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:36.079534   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:36.139810   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:36.174674   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:36.391174   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:36.577841   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:36.640276   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:36.675547   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:36.891595   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:37.077744   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:37.139827   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:37.176158   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:37.392560   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:37.578295   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:37.641879   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:37.676042   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:38.114496   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:38.114656   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:38.339762   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:38.341832   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:38.390903   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:38.577826   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:38.639813   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:38.673952   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:38.890839   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:39.079666   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:39.140167   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:39.174255   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:39.391085   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:39.580376   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:39.640729   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:39.676466   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:39.890500   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:40.078938   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:40.140294   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:40.174570   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:40.390484   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:40.577609   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:40.639498   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:40.673992   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:40.892738   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:41.078167   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:41.138600   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:41.173886   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:41.390716   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:41.594855   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:41.640027   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:41.677550   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:42.137361   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:42.138111   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:42.161283   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:42.175453   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:42.391340   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:42.578531   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:42.639663   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:42.675245   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:42.892488   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:43.077295   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:43.138689   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:43.174498   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:43.390871   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:43.577506   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:43.639915   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:43.673623   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:43.891302   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:44.078088   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:44.139348   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:44.184873   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:44.392916   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:44.578401   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:44.639533   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:44.674778   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:44.892898   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:45.078064   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:45.139548   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:45.173400   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:45.391161   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:45.578566   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:45.639517   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:45.674426   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:45.890419   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:46.077448   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:46.140449   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:46.174094   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:46.391020   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:46.577993   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:46.638638   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:46.673551   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:46.891100   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:47.077735   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:47.139247   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:47.173601   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:47.391050   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:47.580098   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:47.640258   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:47.674653   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:47.891066   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:48.078311   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:48.139594   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:48.173701   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:48.389940   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:48.579225   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:48.638932   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:48.674913   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:48.890980   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:49.078050   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:49.138739   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:49.173975   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:49.392518   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:49.578099   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:49.639854   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:49.674534   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:49.890274   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:50.078248   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:50.139149   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:50.173658   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:50.394871   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:50.578609   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:50.639776   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:50.675582   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:50.891284   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:51.080933   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:51.139495   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:51.173911   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:51.390256   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:51.579961   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:51.639764   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:51.674042   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:51.894306   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:52.184560   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:52.184953   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:52.187424   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:52.389726   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:52.578111   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:52.638577   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:52.673649   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:52.893123   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:53.078264   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:53.138382   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:53.174076   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:53.391688   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:53.577921   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:53.639376   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:53.673635   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:53.891301   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:54.078225   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:54.138863   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:54.174357   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:54.390361   12169 kapi.go:107] duration metric: took 51.504674408s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 14:23:54.577229   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:54.639302   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:54.673655   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:55.078055   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:55.139351   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:55.173757   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:55.578338   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:55.638916   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:55.674126   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:56.078082   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:56.139188   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:56.177136   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:56.579090   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:56.639294   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:56.674070   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:57.078679   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:57.139375   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:57.173409   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:57.765515   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:57.766060   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:57.766493   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:58.079510   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:58.139357   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:58.173663   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:58.578341   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:58.641243   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:58.673486   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:59.077824   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:59.139664   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:59.173880   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:59.577610   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:59.640471   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:59.674615   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:00.089084   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:00.147649   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:00.181957   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:00.607085   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:00.638996   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:00.674654   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:01.078830   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:01.139683   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:01.174678   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:01.582376   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:01.639195   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:01.674127   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:02.079991   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:02.138652   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:02.181639   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:02.578160   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:02.638772   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:02.675479   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:03.078174   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:03.375809   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:03.380167   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:03.577663   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:03.639357   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:03.674024   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:04.078079   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:04.138692   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:04.175546   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:04.578566   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:04.639335   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:04.675736   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:05.079197   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:05.140682   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:05.177342   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:05.578162   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:05.639367   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:05.673792   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:06.078003   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:06.138644   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:06.176198   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:06.754251   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:06.754689   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:06.754826   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:07.077709   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:07.139991   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:07.173837   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:07.577969   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:07.639965   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:07.674099   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:08.077790   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:08.140469   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:08.175987   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:08.579423   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:08.650983   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:08.674429   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:09.077728   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:09.151521   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:09.173915   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:09.586855   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:09.656758   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:09.675669   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:10.077285   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:10.139496   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:10.173914   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:10.578798   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:10.640569   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:10.679558   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:11.077333   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:11.139125   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:11.173193   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:11.578116   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:11.639305   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:11.674936   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:12.078294   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:12.139449   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:12.174132   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:12.577354   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:12.639325   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:12.682051   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:13.080952   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:13.141844   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:13.176464   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:13.577462   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:13.639073   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:13.677193   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:14.095590   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:14.141219   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:14.176853   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:14.731374   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:14.733292   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:14.749345   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:15.077556   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:15.141142   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:15.174621   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:15.577586   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:15.639607   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:15.675455   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:16.078423   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:16.139479   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:16.173958   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:16.578200   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:16.639511   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:16.674451   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:17.078025   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:17.139045   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:17.177630   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:17.578373   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:17.639379   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:17.673909   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:18.080185   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:18.160958   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:18.174707   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:18.790841   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:18.793352   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:18.794120   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:19.077790   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:19.139525   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:19.174000   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:19.578961   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:19.641521   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:19.674485   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:20.077825   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:20.143928   12169 kapi.go:107] duration metric: took 1m14.009212902s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 14:24:20.175434   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:20.578048   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:20.678177   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:21.079242   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:21.174567   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:21.578717   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:21.673721   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:22.078947   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:22.174166   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:22.578158   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:22.674433   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:23.078170   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:23.183244   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:23.577745   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:23.673727   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:24.078933   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:24.184709   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:24.577552   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:24.680034   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:25.077975   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:25.181450   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:25.578555   12169 kapi.go:107] duration metric: took 1m16.504475345s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 14:24:25.580480   12169 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-018825 cluster.
	I0719 14:24:25.582128   12169 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 14:24:25.583526   12169 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 14:24:25.675289   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:26.177915   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:26.675615   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:27.176919   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:27.674799   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:28.173592   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:28.675166   12169 kapi.go:107] duration metric: took 1m21.506864361s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 14:24:28.677070   12169 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, helm-tiller, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0719 14:24:28.678487   12169 addons.go:510] duration metric: took 1m31.815591478s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner helm-tiller storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0719 14:24:28.678534   12169 start.go:246] waiting for cluster config update ...
	I0719 14:24:28.678551   12169 start.go:255] writing updated cluster config ...
	I0719 14:24:28.678807   12169 ssh_runner.go:195] Run: rm -f paused
	I0719 14:24:28.732494   12169 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 14:24:28.734399   12169 out.go:177] * Done! kubectl is now configured to use "addons-018825" cluster and "default" namespace by default
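Note on the gcp-auth hints above: the addon says credential mounting can be skipped per pod by adding a label with the gcp-auth-skip-secret key. A minimal sketch of doing that with kubectl (the pod name my-pod is hypothetical and the label value "true" is an assumption; check the gcp-auth addon docs for the exact value it honours):

    # hypothetical pod; the addon should skip secret injection for pods carrying this label
    kubectl label pod my-pod gcp-auth-skip-secret=true

As the log notes, pods that already exist keep their current mounts until they are recreated or the addon is re-enabled with --refresh.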
	
	
	==> CRI-O <==
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.574616475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721399236574563540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d289b548-0436-4d53-a38d-5f3e978a809c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.575215942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6454abc9-94d3-4589-a8fd-1f8ba600186d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.575289695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6454abc9-94d3-4589-a8fd-1f8ba600186d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.575674359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c79e9129f463c0309fd90ff9b38876b3c5a544d7e307b07981a971c8c422f0a,PodSandboxId:45805c6b967d961e1c6735116e68385497482e29bbff32542328d6cf541f9578,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721399229767556511,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xms8k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6,},Annotations:map[string]string{io.kubernetes.container.hash: 291aca0a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd94bfe746ea4b78e20f44c246e956a69394730c4b395c304936eb6419f0e63,PodSandboxId:32b308f7c4685423bcd52c889bac3d1df242a74550e550cfacdcb13aadc92217,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721399088077353431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e717529c-0e3d-45e0-a926-ef718c1b5993,},Annotations:map[string]string{io.kubernet
es.container.hash: 7a55caeb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe5104151282adb80de7f53c855a9618bf35fe93d56fa8bc15e18059f3c9c29,PodSandboxId:0222f2b642c6c32c76b4e09c1e861e29c1e14fb7f664c943fbe43a9d8c1a9c51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721399075992757790,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-zbbqp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8b18ef56-46ef-41f8-a085-3840463e848b,},Annotations:map[string]string{io.kubernetes.container.hash: 9bb964ab,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7,PodSandboxId:17abd8263031755aab6ee85264043bc7e8d6e79bdd3a34aea3d75833a3510996,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721399064085070163,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-jcn9w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 48482eee-1155-4bd1-815d-da6c964eb84b,},Annotations:map[string]string{io.kubernetes.container.hash: a0b8e1ab,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac035dda401e0303701cd54a1fc03ced08976a403af563c6acdf8db18ab99cc,PodSandboxId:2ae4a45e4078671722e7c592f3214613d580e0c19e9507a0cf1c9300294406de,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721399061906254637,Labels:map[string]
string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxxgr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d08ce95-d2e1-4ba5-beae-c238a9ce51ed,},Annotations:map[string]string{io.kubernetes.container.hash: ec068690,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f6b1210c9e27bfbdc4b48fa3f1617ec520443c05a456d45c302ca42035bb408,PodSandboxId:4dd33747db76e7722e04191206b21aea283f900bed1e212223eb7b962c3e4748,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:17
21399048312215603,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nqmn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f936357e-6fbc-4e23-ba57-00add2837377,},Annotations:map[string]string{io.kubernetes.container.hash: ad474411,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4e7846f214403e557a2e0d9c3560c754e30ae4cead349602afab457a5b134b,PodSandboxId:40d090ea3c19fb34c6248e05a81bd4236415cee2b5cada6ad79b79a00371259f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cr
eatedAt:1721399039428328313,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-hw6vk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2dad5a45-80c8-4d63-aadc-d2166af16dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a8be92c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec,PodSandboxId:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721399024638069798,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-p76dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f3616b2-3dcb-414f-930a-494df347f25f,},Annotations:map[string]string{io.kubernetes.container.hash: 557ea971,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d,PodSandboxId:2c2739e1b982d5074b92fcfabaf52125c29abf02b659aa5fcfec7c5a26b89c91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d62
8db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721398983939675151,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaff945-7762-4f72-9ca2-8d34dd65bf35,},Annotations:map[string]string{io.kubernetes.container.hash: 6018e0a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb,PodSandboxId:c3230c7b31066b79b685df03db3c8864db0b6180c12a9187331779ef31c686dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721398979612365622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-88nlf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6469d6b1-e474-4454-8359-e084930e879c,},Annotations:map[string]string{io.kubernetes.container.hash: 44476b1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f579
5a1085,PodSandboxId:52e9714bd92975ec23e00fa14369a463bb4e15f8ff5d22641737bc63dadea087,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721398976905464359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qkf6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd641a61-241c-4387-86e8-432a465cb34d,},Annotations:map[string]string{io.kubernetes.container.hash: d1e7466e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a,PodSandboxId:3f8b6c88df5ee
1404e310dcafcd242c1ef5e451c25e99402a58b0dc03b7d300c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721398957767810131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c11abc729c66d57f89d84b110e6d88,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860,PodSandboxId:c453e1c48b50c2bec77c372aaedf88
0f4cd8d56d7ef323090f51ebe002f73b11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721398957760469365,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea775724addcafb0355b562dd786d99d,},Annotations:map[string]string{io.kubernetes.container.hash: b0193324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767,PodSandboxId:bf669a39f6ed683018bfcb341d285ee52ed8941460f1cca75acda41afeb0308c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721398957728984706,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f6fc113fd5e070230b8073bfafcb51,},Annotations:map[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b,PodSandboxId:075623588f2bc92065c01198b67a8f05997bfe5c4e6f2b887283fe4e8d5168e9,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721398957737089898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36d31b2a7c34ccb5f227ebdde65c177,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6454abc9-94d3-4589-a8fd-1f8ba600186d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.616355608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a66d969-83b2-4823-8f1c-e897ec2df120 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.616465915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a66d969-83b2-4823-8f1c-e897ec2df120 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.627916364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0f34a7d-6e51-4a8b-ba2c-9b269e985434 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.629308304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721399236629285835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0f34a7d-6e51-4a8b-ba2c-9b269e985434 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.630343685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72680e66-67be-4973-b284-e4a86319ca2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.630416232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72680e66-67be-4973-b284-e4a86319ca2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.630954460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c79e9129f463c0309fd90ff9b38876b3c5a544d7e307b07981a971c8c422f0a,PodSandboxId:45805c6b967d961e1c6735116e68385497482e29bbff32542328d6cf541f9578,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721399229767556511,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xms8k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6,},Annotations:map[string]string{io.kubernetes.container.hash: 291aca0a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd94bfe746ea4b78e20f44c246e956a69394730c4b395c304936eb6419f0e63,PodSandboxId:32b308f7c4685423bcd52c889bac3d1df242a74550e550cfacdcb13aadc92217,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721399088077353431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e717529c-0e3d-45e0-a926-ef718c1b5993,},Annotations:map[string]string{io.kubernet
es.container.hash: 7a55caeb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe5104151282adb80de7f53c855a9618bf35fe93d56fa8bc15e18059f3c9c29,PodSandboxId:0222f2b642c6c32c76b4e09c1e861e29c1e14fb7f664c943fbe43a9d8c1a9c51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721399075992757790,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-zbbqp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8b18ef56-46ef-41f8-a085-3840463e848b,},Annotations:map[string]string{io.kubernetes.container.hash: 9bb964ab,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7,PodSandboxId:17abd8263031755aab6ee85264043bc7e8d6e79bdd3a34aea3d75833a3510996,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721399064085070163,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-jcn9w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 48482eee-1155-4bd1-815d-da6c964eb84b,},Annotations:map[string]string{io.kubernetes.container.hash: a0b8e1ab,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac035dda401e0303701cd54a1fc03ced08976a403af563c6acdf8db18ab99cc,PodSandboxId:2ae4a45e4078671722e7c592f3214613d580e0c19e9507a0cf1c9300294406de,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721399061906254637,Labels:map[string]
string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxxgr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d08ce95-d2e1-4ba5-beae-c238a9ce51ed,},Annotations:map[string]string{io.kubernetes.container.hash: ec068690,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f6b1210c9e27bfbdc4b48fa3f1617ec520443c05a456d45c302ca42035bb408,PodSandboxId:4dd33747db76e7722e04191206b21aea283f900bed1e212223eb7b962c3e4748,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:17
21399048312215603,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nqmn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f936357e-6fbc-4e23-ba57-00add2837377,},Annotations:map[string]string{io.kubernetes.container.hash: ad474411,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4e7846f214403e557a2e0d9c3560c754e30ae4cead349602afab457a5b134b,PodSandboxId:40d090ea3c19fb34c6248e05a81bd4236415cee2b5cada6ad79b79a00371259f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cr
eatedAt:1721399039428328313,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-hw6vk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2dad5a45-80c8-4d63-aadc-d2166af16dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a8be92c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec,PodSandboxId:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721399024638069798,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-p76dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f3616b2-3dcb-414f-930a-494df347f25f,},Annotations:map[string]string{io.kubernetes.container.hash: 557ea971,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d,PodSandboxId:2c2739e1b982d5074b92fcfabaf52125c29abf02b659aa5fcfec7c5a26b89c91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d62
8db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721398983939675151,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaff945-7762-4f72-9ca2-8d34dd65bf35,},Annotations:map[string]string{io.kubernetes.container.hash: 6018e0a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb,PodSandboxId:c3230c7b31066b79b685df03db3c8864db0b6180c12a9187331779ef31c686dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721398979612365622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-88nlf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6469d6b1-e474-4454-8359-e084930e879c,},Annotations:map[string]string{io.kubernetes.container.hash: 44476b1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f579
5a1085,PodSandboxId:52e9714bd92975ec23e00fa14369a463bb4e15f8ff5d22641737bc63dadea087,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721398976905464359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qkf6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd641a61-241c-4387-86e8-432a465cb34d,},Annotations:map[string]string{io.kubernetes.container.hash: d1e7466e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a,PodSandboxId:3f8b6c88df5ee
1404e310dcafcd242c1ef5e451c25e99402a58b0dc03b7d300c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721398957767810131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c11abc729c66d57f89d84b110e6d88,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860,PodSandboxId:c453e1c48b50c2bec77c372aaedf88
0f4cd8d56d7ef323090f51ebe002f73b11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721398957760469365,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea775724addcafb0355b562dd786d99d,},Annotations:map[string]string{io.kubernetes.container.hash: b0193324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767,PodSandboxId:bf669a39f6ed683018bfcb341d285ee52ed8941460f1cca75acda41afeb0308c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721398957728984706,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f6fc113fd5e070230b8073bfafcb51,},Annotations:map[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b,PodSandboxId:075623588f2bc92065c01198b67a8f05997bfe5c4e6f2b887283fe4e8d5168e9,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721398957737089898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36d31b2a7c34ccb5f227ebdde65c177,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72680e66-67be-4973-b284-e4a86319ca2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.665419166Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb31f121-e58e-4700-bc38-e778793e6e76 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.665713550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb31f121-e58e-4700-bc38-e778793e6e76 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.666911698Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afdf373a-90ac-4ac0-889c-ff87099af0fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.668370895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721399236668343885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afdf373a-90ac-4ac0-889c-ff87099af0fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.668958207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=648c3451-91bd-4762-8c0c-bae60d167930 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.669029405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=648c3451-91bd-4762-8c0c-bae60d167930 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.669346613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c79e9129f463c0309fd90ff9b38876b3c5a544d7e307b07981a971c8c422f0a,PodSandboxId:45805c6b967d961e1c6735116e68385497482e29bbff32542328d6cf541f9578,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721399229767556511,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xms8k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6,},Annotations:map[string]string{io.kubernetes.container.hash: 291aca0a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd94bfe746ea4b78e20f44c246e956a69394730c4b395c304936eb6419f0e63,PodSandboxId:32b308f7c4685423bcd52c889bac3d1df242a74550e550cfacdcb13aadc92217,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721399088077353431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e717529c-0e3d-45e0-a926-ef718c1b5993,},Annotations:map[string]string{io.kubernet
es.container.hash: 7a55caeb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe5104151282adb80de7f53c855a9618bf35fe93d56fa8bc15e18059f3c9c29,PodSandboxId:0222f2b642c6c32c76b4e09c1e861e29c1e14fb7f664c943fbe43a9d8c1a9c51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721399075992757790,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-zbbqp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8b18ef56-46ef-41f8-a085-3840463e848b,},Annotations:map[string]string{io.kubernetes.container.hash: 9bb964ab,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7,PodSandboxId:17abd8263031755aab6ee85264043bc7e8d6e79bdd3a34aea3d75833a3510996,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721399064085070163,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-jcn9w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 48482eee-1155-4bd1-815d-da6c964eb84b,},Annotations:map[string]string{io.kubernetes.container.hash: a0b8e1ab,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac035dda401e0303701cd54a1fc03ced08976a403af563c6acdf8db18ab99cc,PodSandboxId:2ae4a45e4078671722e7c592f3214613d580e0c19e9507a0cf1c9300294406de,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721399061906254637,Labels:map[string]
string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxxgr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d08ce95-d2e1-4ba5-beae-c238a9ce51ed,},Annotations:map[string]string{io.kubernetes.container.hash: ec068690,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f6b1210c9e27bfbdc4b48fa3f1617ec520443c05a456d45c302ca42035bb408,PodSandboxId:4dd33747db76e7722e04191206b21aea283f900bed1e212223eb7b962c3e4748,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:17
21399048312215603,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nqmn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f936357e-6fbc-4e23-ba57-00add2837377,},Annotations:map[string]string{io.kubernetes.container.hash: ad474411,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4e7846f214403e557a2e0d9c3560c754e30ae4cead349602afab457a5b134b,PodSandboxId:40d090ea3c19fb34c6248e05a81bd4236415cee2b5cada6ad79b79a00371259f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cr
eatedAt:1721399039428328313,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-hw6vk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2dad5a45-80c8-4d63-aadc-d2166af16dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a8be92c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec,PodSandboxId:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721399024638069798,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-p76dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f3616b2-3dcb-414f-930a-494df347f25f,},Annotations:map[string]string{io.kubernetes.container.hash: 557ea971,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d,PodSandboxId:2c2739e1b982d5074b92fcfabaf52125c29abf02b659aa5fcfec7c5a26b89c91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d62
8db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721398983939675151,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaff945-7762-4f72-9ca2-8d34dd65bf35,},Annotations:map[string]string{io.kubernetes.container.hash: 6018e0a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb,PodSandboxId:c3230c7b31066b79b685df03db3c8864db0b6180c12a9187331779ef31c686dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721398979612365622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-88nlf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6469d6b1-e474-4454-8359-e084930e879c,},Annotations:map[string]string{io.kubernetes.container.hash: 44476b1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f579
5a1085,PodSandboxId:52e9714bd92975ec23e00fa14369a463bb4e15f8ff5d22641737bc63dadea087,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721398976905464359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qkf6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd641a61-241c-4387-86e8-432a465cb34d,},Annotations:map[string]string{io.kubernetes.container.hash: d1e7466e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a,PodSandboxId:3f8b6c88df5ee
1404e310dcafcd242c1ef5e451c25e99402a58b0dc03b7d300c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721398957767810131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c11abc729c66d57f89d84b110e6d88,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860,PodSandboxId:c453e1c48b50c2bec77c372aaedf88
0f4cd8d56d7ef323090f51ebe002f73b11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721398957760469365,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea775724addcafb0355b562dd786d99d,},Annotations:map[string]string{io.kubernetes.container.hash: b0193324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767,PodSandboxId:bf669a39f6ed683018bfcb341d285ee52ed8941460f1cca75acda41afeb0308c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721398957728984706,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f6fc113fd5e070230b8073bfafcb51,},Annotations:map[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b,PodSandboxId:075623588f2bc92065c01198b67a8f05997bfe5c4e6f2b887283fe4e8d5168e9,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721398957737089898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36d31b2a7c34ccb5f227ebdde65c177,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=648c3451-91bd-4762-8c0c-bae60d167930 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.707189510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f12e33e-143d-48a1-9ef9-de87a810b963 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.707281787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f12e33e-143d-48a1-9ef9-de87a810b963 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.708646011Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f6ac6a5-133d-4c9a-9cf2-df1c2d9f446e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.710010772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721399236709980662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f6ac6a5-133d-4c9a-9cf2-df1c2d9f446e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.710667188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6182dc14-a872-4d28-ac37-75a526148c41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.710728069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6182dc14-a872-4d28-ac37-75a526148c41 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:27:16 addons-018825 crio[682]: time="2024-07-19 14:27:16.711030576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c79e9129f463c0309fd90ff9b38876b3c5a544d7e307b07981a971c8c422f0a,PodSandboxId:45805c6b967d961e1c6735116e68385497482e29bbff32542328d6cf541f9578,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721399229767556511,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xms8k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6,},Annotations:map[string]string{io.kubernetes.container.hash: 291aca0a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd94bfe746ea4b78e20f44c246e956a69394730c4b395c304936eb6419f0e63,PodSandboxId:32b308f7c4685423bcd52c889bac3d1df242a74550e550cfacdcb13aadc92217,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721399088077353431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e717529c-0e3d-45e0-a926-ef718c1b5993,},Annotations:map[string]string{io.kubernet
es.container.hash: 7a55caeb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe5104151282adb80de7f53c855a9618bf35fe93d56fa8bc15e18059f3c9c29,PodSandboxId:0222f2b642c6c32c76b4e09c1e861e29c1e14fb7f664c943fbe43a9d8c1a9c51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721399075992757790,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-zbbqp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8b18ef56-46ef-41f8-a085-3840463e848b,},Annotations:map[string]string{io.kubernetes.container.hash: 9bb964ab,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7,PodSandboxId:17abd8263031755aab6ee85264043bc7e8d6e79bdd3a34aea3d75833a3510996,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721399064085070163,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-jcn9w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 48482eee-1155-4bd1-815d-da6c964eb84b,},Annotations:map[string]string{io.kubernetes.container.hash: a0b8e1ab,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fac035dda401e0303701cd54a1fc03ced08976a403af563c6acdf8db18ab99cc,PodSandboxId:2ae4a45e4078671722e7c592f3214613d580e0c19e9507a0cf1c9300294406de,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721399061906254637,Labels:map[string]
string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-zxxgr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8d08ce95-d2e1-4ba5-beae-c238a9ce51ed,},Annotations:map[string]string{io.kubernetes.container.hash: ec068690,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f6b1210c9e27bfbdc4b48fa3f1617ec520443c05a456d45c302ca42035bb408,PodSandboxId:4dd33747db76e7722e04191206b21aea283f900bed1e212223eb7b962c3e4748,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:17
21399048312215603,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7nqmn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f936357e-6fbc-4e23-ba57-00add2837377,},Annotations:map[string]string{io.kubernetes.container.hash: ad474411,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4e7846f214403e557a2e0d9c3560c754e30ae4cead349602afab457a5b134b,PodSandboxId:40d090ea3c19fb34c6248e05a81bd4236415cee2b5cada6ad79b79a00371259f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,Cr
eatedAt:1721399039428328313,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-hw6vk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2dad5a45-80c8-4d63-aadc-d2166af16dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a8be92c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec,PodSandboxId:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721399024638069798,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-p76dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f3616b2-3dcb-414f-930a-494df347f25f,},Annotations:map[string]string{io.kubernetes.container.hash: 557ea971,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d,PodSandboxId:2c2739e1b982d5074b92fcfabaf52125c29abf02b659aa5fcfec7c5a26b89c91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d62
8db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721398983939675151,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaff945-7762-4f72-9ca2-8d34dd65bf35,},Annotations:map[string]string{io.kubernetes.container.hash: 6018e0a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb,PodSandboxId:c3230c7b31066b79b685df03db3c8864db0b6180c12a9187331779ef31c686dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674f
b0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721398979612365622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-88nlf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6469d6b1-e474-4454-8359-e084930e879c,},Annotations:map[string]string{io.kubernetes.container.hash: 44476b1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f579
5a1085,PodSandboxId:52e9714bd92975ec23e00fa14369a463bb4e15f8ff5d22641737bc63dadea087,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721398976905464359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qkf6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd641a61-241c-4387-86e8-432a465cb34d,},Annotations:map[string]string{io.kubernetes.container.hash: d1e7466e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a,PodSandboxId:3f8b6c88df5ee
1404e310dcafcd242c1ef5e451c25e99402a58b0dc03b7d300c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721398957767810131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c11abc729c66d57f89d84b110e6d88,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860,PodSandboxId:c453e1c48b50c2bec77c372aaedf88
0f4cd8d56d7ef323090f51ebe002f73b11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721398957760469365,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea775724addcafb0355b562dd786d99d,},Annotations:map[string]string{io.kubernetes.container.hash: b0193324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767,PodSandboxId:bf669a39f6ed683018bfcb341d285ee52ed8941460f1cca75acda41afeb0308c,Metadata:&Co
ntainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721398957728984706,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f6fc113fd5e070230b8073bfafcb51,},Annotations:map[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b,PodSandboxId:075623588f2bc92065c01198b67a8f05997bfe5c4e6f2b887283fe4e8d5168e9,Metadata:&ContainerMetadata{N
ame:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721398957737089898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36d31b2a7c34ccb5f227ebdde65c177,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6182dc14-a872-4d28-ac37-75a526148c41 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c79e9129f463       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   45805c6b967d9       hello-world-app-6778b5fc9f-xms8k
	2dd94bfe746ea       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   32b308f7c4685       nginx
	9fe5104151282       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   0222f2b642c6c       headlamp-7867546754-zbbqp
	20d9dbb2af5f6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   17abd82630317       gcp-auth-5db96cd9b4-jcn9w
	fac035dda401e       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             2 minutes ago       Exited              patch                     2                   2ae4a45e40786       ingress-nginx-admission-patch-zxxgr
	9f6b1210c9e27       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   4dd33747db76e       ingress-nginx-admission-create-7nqmn
	4a4e7846f2144       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              3 minutes ago       Running             yakd                      0                   40d090ea3c19f       yakd-dashboard-799879c74f-hw6vk
	7079bee0e4947       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   0b73c2002a246       metrics-server-c59844bb4-p76dw
	822879e4213fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   2c2739e1b982d       storage-provisioner
	422891b0c477f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   c3230c7b31066       coredns-7db6d8ff4d-88nlf
	fd66a0731caf0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             4 minutes ago       Running             kube-proxy                0                   52e9714bd9297       kube-proxy-qkf6b
	e3dc32aa02fb3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             4 minutes ago       Running             kube-scheduler            0                   3f8b6c88df5ee       kube-scheduler-addons-018825
	9144c07256374       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   c453e1c48b50c       etcd-addons-018825
	b6ca310eec97b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             4 minutes ago       Running             kube-controller-manager   0                   075623588f2bc       kube-controller-manager-addons-018825
	810c03a705d7a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             4 minutes ago       Running             kube-apiserver            0                   bf669a39f6ed6       kube-apiserver-addons-018825
	
	
	==> coredns [422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb] <==
	[INFO] 10.244.0.8:40121 - 58535 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000174239s
	[INFO] 10.244.0.8:35746 - 24813 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106351s
	[INFO] 10.244.0.8:35746 - 29423 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055441s
	[INFO] 10.244.0.8:46291 - 52391 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122056s
	[INFO] 10.244.0.8:46291 - 58789 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080723s
	[INFO] 10.244.0.8:48091 - 57259 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145062s
	[INFO] 10.244.0.8:48091 - 23977 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000165056s
	[INFO] 10.244.0.8:37162 - 64064 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000065297s
	[INFO] 10.244.0.8:37162 - 21830 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000026607s
	[INFO] 10.244.0.8:49381 - 50366 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030695s
	[INFO] 10.244.0.8:49381 - 27323 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000021558s
	[INFO] 10.244.0.8:58214 - 54981 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027148s
	[INFO] 10.244.0.8:58214 - 60359 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000020888s
	[INFO] 10.244.0.8:53807 - 54883 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000026107s
	[INFO] 10.244.0.8:53807 - 64865 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000029304s
	[INFO] 10.244.0.22:44007 - 57426 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000643433s
	[INFO] 10.244.0.22:34435 - 64848 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000478536s
	[INFO] 10.244.0.22:56700 - 49705 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000679509s
	[INFO] 10.244.0.22:39456 - 13856 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106406s
	[INFO] 10.244.0.22:40810 - 33356 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168688s
	[INFO] 10.244.0.22:54214 - 40721 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000310371s
	[INFO] 10.244.0.22:55579 - 26038 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000619934s
	[INFO] 10.244.0.22:54484 - 9458 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000790172s
	[INFO] 10.244.0.25:46992 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000422766s
	[INFO] 10.244.0.25:36367 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148274s
	
	
	==> describe nodes <==
	Name:               addons-018825
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-018825
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=addons-018825
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T14_22_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-018825
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:22:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-018825
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:27:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:25:46 +0000   Fri, 19 Jul 2024 14:22:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:25:46 +0000   Fri, 19 Jul 2024 14:22:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:25:46 +0000   Fri, 19 Jul 2024 14:22:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:25:46 +0000   Fri, 19 Jul 2024 14:22:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    addons-018825
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 98d28a50f0df4400be945283e1dcebdb
	  System UUID:                98d28a50-f0df-4400-be94-5283e1dcebdb
	  Boot ID:                    c79de801-425e-4495-a3c6-178016b9936c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-xms8k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-jcn9w                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  headlamp                    headlamp-7867546754-zbbqp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-7db6d8ff4d-88nlf                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m20s
	  kube-system                 etcd-addons-018825                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m34s
	  kube-system                 kube-apiserver-addons-018825             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-addons-018825    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-qkf6b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-addons-018825             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 metrics-server-c59844bb4-p76dw           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m14s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  yakd-dashboard              yakd-dashboard-799879c74f-hw6vk          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node addons-018825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node addons-018825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m39s (x7 over 4m39s)  kubelet          Node addons-018825 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m34s                  kubelet          Node addons-018825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s                  kubelet          Node addons-018825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s                  kubelet          Node addons-018825 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m33s                  kubelet          Node addons-018825 status is now: NodeReady
	  Normal  RegisteredNode           4m21s                  node-controller  Node addons-018825 event: Registered Node addons-018825 in Controller
	
	
	==> dmesg <==
	[ +14.020982] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.255562] systemd-fstab-generator[1528]: Ignoring "noauto" option for root device
	[Jul19 14:23] kauditd_printk_skb: 101 callbacks suppressed
	[  +5.016150] kauditd_printk_skb: 126 callbacks suppressed
	[  +7.481033] kauditd_printk_skb: 98 callbacks suppressed
	[ +20.002413] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.001009] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.350943] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.050033] kauditd_printk_skb: 2 callbacks suppressed
	[Jul19 14:24] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.283871] kauditd_printk_skb: 50 callbacks suppressed
	[  +9.225814] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.662219] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.063069] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.605583] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.614091] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.152793] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.471359] kauditd_printk_skb: 3 callbacks suppressed
	[Jul19 14:25] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.043853] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.803530] kauditd_printk_skb: 35 callbacks suppressed
	[Jul19 14:26] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.449891] kauditd_printk_skb: 33 callbacks suppressed
	[Jul19 14:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.940695] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860] <==
	{"level":"warn","ts":"2024-07-19T14:24:18.775103Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.432035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85649"}
	{"level":"info","ts":"2024-07-19T14:24:18.776331Z","caller":"traceutil/trace.go:171","msg":"trace[1068217016] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1144; }","duration":"115.441773ms","start":"2024-07-19T14:24:18.660647Z","end":"2024-07-19T14:24:18.776089Z","steps":["trace[1068217016] 'agreement among raft nodes before linearized reading'  (duration: 114.350894ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:24:23.898179Z","caller":"traceutil/trace.go:171","msg":"trace[827142375] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"107.693303ms","start":"2024-07-19T14:24:23.790471Z","end":"2024-07-19T14:24:23.898164Z","steps":["trace[827142375] 'process raft request'  (duration: 107.367229ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:24:33.793909Z","caller":"traceutil/trace.go:171","msg":"trace[1475961601] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"293.577313ms","start":"2024-07-19T14:24:33.500304Z","end":"2024-07-19T14:24:33.793881Z","steps":["trace[1475961601] 'read index received'  (duration: 293.447167ms)","trace[1475961601] 'applied index is now lower than readState.Index'  (duration: 129.686µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T14:24:33.794201Z","caller":"traceutil/trace.go:171","msg":"trace[2119557611] transaction","detail":"{read_only:false; response_revision:1263; number_of_response:1; }","duration":"456.508515ms","start":"2024-07-19T14:24:33.337679Z","end":"2024-07-19T14:24:33.794187Z","steps":["trace[2119557611] 'process raft request'  (duration: 456.115958ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:24:33.794332Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:24:33.33766Z","time spent":"456.570276ms","remote":"127.0.0.1:44308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-2ghzlkwnnipkmrn5gyeinw4bvu\" mod_revision:1175 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-2ghzlkwnnipkmrn5gyeinw4bvu\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-2ghzlkwnnipkmrn5gyeinw4bvu\" > >"}
	{"level":"warn","ts":"2024-07-19T14:24:33.794473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.184984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T14:24:33.794552Z","caller":"traceutil/trace.go:171","msg":"trace[1897348490] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1263; }","duration":"294.302313ms","start":"2024-07-19T14:24:33.500243Z","end":"2024-07-19T14:24:33.794546Z","steps":["trace[1897348490] 'agreement among raft nodes before linearized reading'  (duration: 294.205552ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:24:35.833445Z","caller":"traceutil/trace.go:171","msg":"trace[1946019562] linearizableReadLoop","detail":"{readStateIndex:1316; appliedIndex:1315; }","duration":"307.492572ms","start":"2024-07-19T14:24:35.525938Z","end":"2024-07-19T14:24:35.83343Z","steps":["trace[1946019562] 'read index received'  (duration: 307.177936ms)","trace[1946019562] 'applied index is now lower than readState.Index'  (duration: 313.959µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T14:24:35.834121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.179452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88497"}
	{"level":"info","ts":"2024-07-19T14:24:35.834183Z","caller":"traceutil/trace.go:171","msg":"trace[276865736] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1277; }","duration":"308.275482ms","start":"2024-07-19T14:24:35.525896Z","end":"2024-07-19T14:24:35.834172Z","steps":["trace[276865736] 'agreement among raft nodes before linearized reading'  (duration: 307.963159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:24:35.834377Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:24:35.525882Z","time spent":"308.322884ms","remote":"127.0.0.1:44216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":19,"response size":88519,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-07-19T14:24:35.835042Z","caller":"traceutil/trace.go:171","msg":"trace[1836533746] transaction","detail":"{read_only:false; response_revision:1277; number_of_response:1; }","duration":"352.349393ms","start":"2024-07-19T14:24:35.482681Z","end":"2024-07-19T14:24:35.835031Z","steps":["trace[1836533746] 'process raft request'  (duration: 350.47517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:24:35.83514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:24:35.482666Z","time spent":"352.426108ms","remote":"127.0.0.1:44308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-018825\" mod_revision:1199 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-018825\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-018825\" > >"}
	{"level":"info","ts":"2024-07-19T14:24:47.716917Z","caller":"traceutil/trace.go:171","msg":"trace[1860969799] transaction","detail":"{read_only:false; response_revision:1407; number_of_response:1; }","duration":"165.098453ms","start":"2024-07-19T14:24:47.551781Z","end":"2024-07-19T14:24:47.71688Z","steps":["trace[1860969799] 'process raft request'  (duration: 164.932797ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:25:27.38522Z","caller":"traceutil/trace.go:171","msg":"trace[1381445707] transaction","detail":"{read_only:false; response_revision:1594; number_of_response:1; }","duration":"354.534015ms","start":"2024-07-19T14:25:27.030666Z","end":"2024-07-19T14:25:27.3852Z","steps":["trace[1381445707] 'process raft request'  (duration: 354.383611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:25:27.38546Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:25:27.030652Z","time spent":"354.645607ms","remote":"127.0.0.1:44308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1572 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-07-19T14:25:27.385672Z","caller":"traceutil/trace.go:171","msg":"trace[49061208] linearizableReadLoop","detail":"{readStateIndex:1647; appliedIndex:1647; }","duration":"256.615203ms","start":"2024-07-19T14:25:27.129041Z","end":"2024-07-19T14:25:27.385656Z","steps":["trace[49061208] 'read index received'  (duration: 256.611768ms)","trace[49061208] 'applied index is now lower than readState.Index'  (duration: 2.697µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T14:25:27.385827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.77681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T14:25:27.385855Z","caller":"traceutil/trace.go:171","msg":"trace[571064593] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1594; }","duration":"256.829999ms","start":"2024-07-19T14:25:27.129016Z","end":"2024-07-19T14:25:27.385846Z","steps":["trace[571064593] 'agreement among raft nodes before linearized reading'  (duration: 256.742365ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:25:27.386234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.828289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-19T14:25:27.386265Z","caller":"traceutil/trace.go:171","msg":"trace[718249119] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1595; }","duration":"116.885818ms","start":"2024-07-19T14:25:27.26937Z","end":"2024-07-19T14:25:27.386256Z","steps":["trace[718249119] 'agreement among raft nodes before linearized reading'  (duration: 116.741526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:25:27.386701Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.619652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T14:25:27.386745Z","caller":"traceutil/trace.go:171","msg":"trace[333550768] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1595; }","duration":"106.697029ms","start":"2024-07-19T14:25:27.280041Z","end":"2024-07-19T14:25:27.386738Z","steps":["trace[333550768] 'agreement among raft nodes before linearized reading'  (duration: 106.603151ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:25:57.619831Z","caller":"traceutil/trace.go:171","msg":"trace[2132885807] transaction","detail":"{read_only:false; response_revision:1691; number_of_response:1; }","duration":"165.692667ms","start":"2024-07-19T14:25:57.45411Z","end":"2024-07-19T14:25:57.619803Z","steps":["trace[2132885807] 'process raft request'  (duration: 165.592658ms)"],"step_count":1}
	
	
	==> gcp-auth [20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7] <==
	2024/07/19 14:24:24 GCP Auth Webhook started!
	2024/07/19 14:24:29 Ready to marshal response ...
	2024/07/19 14:24:29 Ready to write response ...
	2024/07/19 14:24:29 Ready to marshal response ...
	2024/07/19 14:24:29 Ready to write response ...
	2024/07/19 14:24:29 Ready to marshal response ...
	2024/07/19 14:24:29 Ready to write response ...
	2024/07/19 14:24:33 Ready to marshal response ...
	2024/07/19 14:24:33 Ready to write response ...
	2024/07/19 14:24:39 Ready to marshal response ...
	2024/07/19 14:24:39 Ready to write response ...
	2024/07/19 14:24:42 Ready to marshal response ...
	2024/07/19 14:24:42 Ready to write response ...
	2024/07/19 14:25:04 Ready to marshal response ...
	2024/07/19 14:25:04 Ready to write response ...
	2024/07/19 14:25:04 Ready to marshal response ...
	2024/07/19 14:25:04 Ready to write response ...
	2024/07/19 14:25:17 Ready to marshal response ...
	2024/07/19 14:25:17 Ready to write response ...
	2024/07/19 14:25:19 Ready to marshal response ...
	2024/07/19 14:25:19 Ready to write response ...
	2024/07/19 14:25:52 Ready to marshal response ...
	2024/07/19 14:25:52 Ready to write response ...
	2024/07/19 14:27:06 Ready to marshal response ...
	2024/07/19 14:27:06 Ready to write response ...
	
	
	==> kernel <==
	 14:27:17 up 5 min,  0 users,  load average: 0.84, 1.06, 0.53
	Linux addons-018825 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767] <==
	E0719 14:24:54.149100       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 14:24:54.149965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 14:24:58.156717       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 14:24:58.156838       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0719 14:24:58.156917       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.59.41:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.59.41:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
	I0719 14:24:58.180242       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0719 14:24:58.188609       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	E0719 14:25:33.207702       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0719 14:25:33.824922       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0719 14:26:10.340128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.340294       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.373687       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.374125       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.385461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.385570       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.394782       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.394875       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.433055       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.433582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0719 14:26:11.386917       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0719 14:26:11.433050       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0719 14:26:11.448207       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0719 14:27:06.852865       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.176.29"}
	
	
	==> kube-controller-manager [b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b] <==
	W0719 14:26:27.181007       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:26:27.181036       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:26:31.553603       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:26:31.553703       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:26:31.591788       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:26:31.591852       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:26:42.556107       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:26:42.556163       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:26:46.051844       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:26:46.051877       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:26:53.278326       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:26:53.278376       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:26:55.311389       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:26:55.311441       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 14:27:06.693761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.579069ms"
	I0719 14:27:06.723817       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="29.595555ms"
	I0719 14:27:06.723955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="41.436µs"
	I0719 14:27:06.741198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="46.251µs"
	I0719 14:27:08.733971       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0719 14:27:08.737847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="3.573µs"
	I0719 14:27:08.743199       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0719 14:27:10.856750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="12.328521ms"
	I0719 14:27:10.857005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="30.516µs"
	W0719 14:27:16.291221       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:27:16.291265       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f5795a1085] <==
	I0719 14:22:57.511109       1 server_linux.go:69] "Using iptables proxy"
	I0719 14:22:57.526288       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0719 14:22:57.635469       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 14:22:57.635578       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 14:22:57.635596       1 server_linux.go:165] "Using iptables Proxier"
	I0719 14:22:57.642809       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 14:22:57.643032       1 server.go:872] "Version info" version="v1.30.3"
	I0719 14:22:57.643045       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:22:57.657832       1 config.go:192] "Starting service config controller"
	I0719 14:22:57.657851       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 14:22:57.657892       1 config.go:101] "Starting endpoint slice config controller"
	I0719 14:22:57.657897       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 14:22:57.660150       1 config.go:319] "Starting node config controller"
	I0719 14:22:57.660161       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 14:22:57.758462       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 14:22:57.758565       1 shared_informer.go:320] Caches are synced for service config
	I0719 14:22:57.760608       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a] <==
	E0719 14:22:40.394089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:22:40.394118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 14:22:40.394229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:22:40.394261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:22:40.394347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 14:22:40.394391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 14:22:40.394874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 14:22:40.394979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 14:22:41.219775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:22:41.219891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:22:41.270371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 14:22:41.270587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 14:22:41.370297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 14:22:41.370326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 14:22:41.423072       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 14:22:41.423183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 14:22:41.511864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 14:22:41.511922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 14:22:41.584885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 14:22:41.584973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 14:22:41.602259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 14:22:41.602734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 14:22:41.640759       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 14:22:41.640843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0719 14:22:41.983993       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 14:27:06 addons-018825 kubelet[1271]: I0719 14:27:06.718980    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba367f1-c7ae-4bf5-bc2e-3bbd75010f18" containerName="csi-snapshotter"
	Jul 19 14:27:06 addons-018825 kubelet[1271]: I0719 14:27:06.718984    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="96cfac6f-b2b8-4d18-af59-ac7acd7ba117" containerName="task-pv-container"
	Jul 19 14:27:06 addons-018825 kubelet[1271]: I0719 14:27:06.719042    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="c715f347-d341-4f6d-a2e8-ad1d7984ea15" containerName="csi-resizer"
	Jul 19 14:27:06 addons-018825 kubelet[1271]: I0719 14:27:06.719049    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="324e961d-ccdf-4cac-9736-a5a22192761c" containerName="csi-attacher"
	Jul 19 14:27:06 addons-018825 kubelet[1271]: I0719 14:27:06.719054    1271 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba367f1-c7ae-4bf5-bc2e-3bbd75010f18" containerName="hostpath"
	Jul 19 14:27:06 addons-018825 kubelet[1271]: I0719 14:27:06.829015    1271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6-gcp-creds\") pod \"hello-world-app-6778b5fc9f-xms8k\" (UID: \"9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6\") " pod="default/hello-world-app-6778b5fc9f-xms8k"
	Jul 19 14:27:06 addons-018825 kubelet[1271]: I0719 14:27:06.829083    1271 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gf5c\" (UniqueName: \"kubernetes.io/projected/9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6-kube-api-access-5gf5c\") pod \"hello-world-app-6778b5fc9f-xms8k\" (UID: \"9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6\") " pod="default/hello-world-app-6778b5fc9f-xms8k"
	Jul 19 14:27:07 addons-018825 kubelet[1271]: I0719 14:27:07.799311    1271 scope.go:117] "RemoveContainer" containerID="5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13"
	Jul 19 14:27:07 addons-018825 kubelet[1271]: I0719 14:27:07.819635    1271 scope.go:117] "RemoveContainer" containerID="5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13"
	Jul 19 14:27:07 addons-018825 kubelet[1271]: E0719 14:27:07.820335    1271 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13\": container with ID starting with 5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13 not found: ID does not exist" containerID="5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13"
	Jul 19 14:27:07 addons-018825 kubelet[1271]: I0719 14:27:07.820384    1271 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13"} err="failed to get container status \"5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13\": rpc error: code = NotFound desc = could not find container \"5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13\": container with ID starting with 5a85dd9708bfea497208f7a3607da7a96ebed6f10bcfd2140de3fe200eb43c13 not found: ID does not exist"
	Jul 19 14:27:07 addons-018825 kubelet[1271]: I0719 14:27:07.847435    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkkjl\" (UniqueName: \"kubernetes.io/projected/543d1957-29b4-4f11-a3ef-a50baed9131f-kube-api-access-bkkjl\") pod \"543d1957-29b4-4f11-a3ef-a50baed9131f\" (UID: \"543d1957-29b4-4f11-a3ef-a50baed9131f\") "
	Jul 19 14:27:07 addons-018825 kubelet[1271]: I0719 14:27:07.850879    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543d1957-29b4-4f11-a3ef-a50baed9131f-kube-api-access-bkkjl" (OuterVolumeSpecName: "kube-api-access-bkkjl") pod "543d1957-29b4-4f11-a3ef-a50baed9131f" (UID: "543d1957-29b4-4f11-a3ef-a50baed9131f"). InnerVolumeSpecName "kube-api-access-bkkjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 14:27:07 addons-018825 kubelet[1271]: I0719 14:27:07.948760    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bkkjl\" (UniqueName: \"kubernetes.io/projected/543d1957-29b4-4f11-a3ef-a50baed9131f-kube-api-access-bkkjl\") on node \"addons-018825\" DevicePath \"\""
	Jul 19 14:27:08 addons-018825 kubelet[1271]: I0719 14:27:08.893874    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543d1957-29b4-4f11-a3ef-a50baed9131f" path="/var/lib/kubelet/pods/543d1957-29b4-4f11-a3ef-a50baed9131f/volumes"
	Jul 19 14:27:08 addons-018825 kubelet[1271]: I0719 14:27:08.894309    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d08ce95-d2e1-4ba5-beae-c238a9ce51ed" path="/var/lib/kubelet/pods/8d08ce95-d2e1-4ba5-beae-c238a9ce51ed/volumes"
	Jul 19 14:27:08 addons-018825 kubelet[1271]: I0719 14:27:08.894824    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f936357e-6fbc-4e23-ba57-00add2837377" path="/var/lib/kubelet/pods/f936357e-6fbc-4e23-ba57-00add2837377/volumes"
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.079666    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ebd07cd6-48e5-4937-87f4-710c87412ac4-webhook-cert\") pod \"ebd07cd6-48e5-4937-87f4-710c87412ac4\" (UID: \"ebd07cd6-48e5-4937-87f4-710c87412ac4\") "
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.079719    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g788k\" (UniqueName: \"kubernetes.io/projected/ebd07cd6-48e5-4937-87f4-710c87412ac4-kube-api-access-g788k\") pod \"ebd07cd6-48e5-4937-87f4-710c87412ac4\" (UID: \"ebd07cd6-48e5-4937-87f4-710c87412ac4\") "
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.083760    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd07cd6-48e5-4937-87f4-710c87412ac4-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ebd07cd6-48e5-4937-87f4-710c87412ac4" (UID: "ebd07cd6-48e5-4937-87f4-710c87412ac4"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.085325    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd07cd6-48e5-4937-87f4-710c87412ac4-kube-api-access-g788k" (OuterVolumeSpecName: "kube-api-access-g788k") pod "ebd07cd6-48e5-4937-87f4-710c87412ac4" (UID: "ebd07cd6-48e5-4937-87f4-710c87412ac4"). InnerVolumeSpecName "kube-api-access-g788k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.180841    1271 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ebd07cd6-48e5-4937-87f4-710c87412ac4-webhook-cert\") on node \"addons-018825\" DevicePath \"\""
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.180872    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-g788k\" (UniqueName: \"kubernetes.io/projected/ebd07cd6-48e5-4937-87f4-710c87412ac4-kube-api-access-g788k\") on node \"addons-018825\" DevicePath \"\""
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.842597    1271 scope.go:117] "RemoveContainer" containerID="499f07b362c027d65ae899dccd7865acd64cc990b1914ff6d0c6769066f90fd7"
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.893321    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd07cd6-48e5-4937-87f4-710c87412ac4" path="/var/lib/kubelet/pods/ebd07cd6-48e5-4937-87f4-710c87412ac4/volumes"
	
	
	==> storage-provisioner [822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d] <==
	I0719 14:23:05.113118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 14:23:05.136225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 14:23:05.136296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 14:23:05.146406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 14:23:05.146666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-018825_a167b234-6848-455d-82c0-996c63c3021d!
	I0719 14:23:05.153022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b8c82fa-361f-4961-849c-fa9007c57d08", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-018825_a167b234-6848-455d-82c0-996c63c3021d became leader
	I0719 14:23:05.255872       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-018825_a167b234-6848-455d-82c0-996c63c3021d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-018825 -n addons-018825
helpers_test.go:261: (dbg) Run:  kubectl --context addons-018825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.57s)

TestAddons/parallel/MetricsServer (353.48s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.612758ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-p76dw" [4f3616b2-3dcb-414f-930a-494df347f25f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005667215s
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (60.919039ms)

** stderr ** 
	error: Metrics API not available

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (55.668931ms)

** stderr ** 
	error: Metrics API not available

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (55.385423ms)

** stderr ** 
	error: Metrics API not available

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (51.765731ms)

** stderr ** 
	error: Metrics API not available

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (95.727744ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 2m8.443877905s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (76.661882ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 2m30.947444564s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (62.272117ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 2m45.305332423s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (60.268019ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 3m35.628582132s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (60.773988ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 4m36.630294354s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (64.21801ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 6m5.696470952s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (64.429664ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 6m40.249544468s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-018825 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-018825 top pods -n kube-system: exit status 1 (61.137433ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-88nlf, age: 7m29.20734307s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-018825 -n addons-018825
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-018825 logs -n 25: (1.441486152s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-819425                                                                     | download-only-819425 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| delete  | -p download-only-944621                                                                     | download-only-944621 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| delete  | -p download-only-905246                                                                     | download-only-905246 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| delete  | -p download-only-819425                                                                     | download-only-819425 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-598622 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC |                     |
	|         | binary-mirror-598622                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-598622                                                                     | binary-mirror-598622 | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:22 UTC |
	| addons  | disable dashboard -p                                                                        | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC |                     |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC |                     |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-018825 --wait=true                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:22 UTC | 19 Jul 24 14:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | -p addons-018825                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | -p addons-018825                                                                            |                      |         |         |                     |                     |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-018825 ip                                                                            | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC | 19 Jul 24 14:24 UTC |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-018825 ssh curl -s                                                                   | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:25 UTC | 19 Jul 24 14:25 UTC |
	|         | addons-018825                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-018825 ssh cat                                                                       | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:25 UTC | 19 Jul 24 14:25 UTC |
	|         | /opt/local-path-provisioner/pvc-b22e2d8b-ef50-4e0e-ac1c-eda671cc595d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:25 UTC | 19 Jul 24 14:26 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-018825 addons                                                                        | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:26 UTC | 19 Jul 24 14:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-018825 addons                                                                        | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:26 UTC | 19 Jul 24 14:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-018825 ip                                                                            | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:27 UTC | 19 Jul 24 14:27 UTC |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:27 UTC | 19 Jul 24 14:27 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-018825 addons disable                                                                | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:27 UTC | 19 Jul 24 14:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-018825 addons                                                                        | addons-018825        | jenkins | v1.33.1 | 19 Jul 24 14:30 UTC | 19 Jul 24 14:30 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:22:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:22:02.276134   12169 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:22:02.276405   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:22:02.276415   12169 out.go:304] Setting ErrFile to fd 2...
	I0719 14:22:02.276419   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:22:02.276587   12169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:22:02.277153   12169 out.go:298] Setting JSON to false
	I0719 14:22:02.278021   12169 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":268,"bootTime":1721398654,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:22:02.278079   12169 start.go:139] virtualization: kvm guest
	I0719 14:22:02.279993   12169 out.go:177] * [addons-018825] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:22:02.281615   12169 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:22:02.281667   12169 notify.go:220] Checking for updates...
	I0719 14:22:02.283972   12169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:22:02.285155   12169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:22:02.286404   12169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:22:02.287663   12169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:22:02.288966   12169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:22:02.290429   12169 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:22:02.323929   12169 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 14:22:02.325226   12169 start.go:297] selected driver: kvm2
	I0719 14:22:02.325253   12169 start.go:901] validating driver "kvm2" against <nil>
	I0719 14:22:02.325265   12169 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:22:02.325974   12169 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:22:02.326043   12169 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:22:02.340475   12169 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:22:02.340533   12169 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 14:22:02.340770   12169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:22:02.340826   12169 cni.go:84] Creating CNI manager for ""
	I0719 14:22:02.340839   12169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:22:02.340848   12169 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 14:22:02.340909   12169 start.go:340] cluster config:
	{Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:22:02.340997   12169 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:22:02.342880   12169 out.go:177] * Starting "addons-018825" primary control-plane node in "addons-018825" cluster
	I0719 14:22:02.344310   12169 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:22:02.344349   12169 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 14:22:02.344357   12169 cache.go:56] Caching tarball of preloaded images
	I0719 14:22:02.344443   12169 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:22:02.344452   12169 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:22:02.344721   12169 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/config.json ...
	I0719 14:22:02.344738   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/config.json: {Name:mk2182d403a7be310714d6cedc0644b0c733d792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:02.344868   12169 start.go:360] acquireMachinesLock for addons-018825: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:22:02.344913   12169 start.go:364] duration metric: took 33.673µs to acquireMachinesLock for "addons-018825"
	I0719 14:22:02.344930   12169 start.go:93] Provisioning new machine with config: &{Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:22:02.344975   12169 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 14:22:02.346464   12169 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 14:22:02.346577   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:02.346614   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:02.360713   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I0719 14:22:02.361173   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:02.361799   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:02.361816   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:02.362098   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:02.362280   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:02.362422   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:02.362561   12169 start.go:159] libmachine.API.Create for "addons-018825" (driver="kvm2")
	I0719 14:22:02.362589   12169 client.go:168] LocalClient.Create starting
	I0719 14:22:02.362630   12169 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:22:02.540029   12169 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:22:02.643799   12169 main.go:141] libmachine: Running pre-create checks...
	I0719 14:22:02.643824   12169 main.go:141] libmachine: (addons-018825) Calling .PreCreateCheck
	I0719 14:22:02.644334   12169 main.go:141] libmachine: (addons-018825) Calling .GetConfigRaw
	I0719 14:22:02.644824   12169 main.go:141] libmachine: Creating machine...
	I0719 14:22:02.644838   12169 main.go:141] libmachine: (addons-018825) Calling .Create
	I0719 14:22:02.644991   12169 main.go:141] libmachine: (addons-018825) Creating KVM machine...
	I0719 14:22:02.646186   12169 main.go:141] libmachine: (addons-018825) DBG | found existing default KVM network
	I0719 14:22:02.646897   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:02.646768   12191 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0719 14:22:02.646975   12169 main.go:141] libmachine: (addons-018825) DBG | created network xml: 
	I0719 14:22:02.646996   12169 main.go:141] libmachine: (addons-018825) DBG | <network>
	I0719 14:22:02.647004   12169 main.go:141] libmachine: (addons-018825) DBG |   <name>mk-addons-018825</name>
	I0719 14:22:02.647016   12169 main.go:141] libmachine: (addons-018825) DBG |   <dns enable='no'/>
	I0719 14:22:02.647023   12169 main.go:141] libmachine: (addons-018825) DBG |   
	I0719 14:22:02.647030   12169 main.go:141] libmachine: (addons-018825) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 14:22:02.647041   12169 main.go:141] libmachine: (addons-018825) DBG |     <dhcp>
	I0719 14:22:02.647049   12169 main.go:141] libmachine: (addons-018825) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 14:22:02.647059   12169 main.go:141] libmachine: (addons-018825) DBG |     </dhcp>
	I0719 14:22:02.647064   12169 main.go:141] libmachine: (addons-018825) DBG |   </ip>
	I0719 14:22:02.647070   12169 main.go:141] libmachine: (addons-018825) DBG |   
	I0719 14:22:02.647077   12169 main.go:141] libmachine: (addons-018825) DBG | </network>
	I0719 14:22:02.647085   12169 main.go:141] libmachine: (addons-018825) DBG | 
	I0719 14:22:02.652713   12169 main.go:141] libmachine: (addons-018825) DBG | trying to create private KVM network mk-addons-018825 192.168.39.0/24...
	I0719 14:22:02.718503   12169 main.go:141] libmachine: (addons-018825) DBG | private KVM network mk-addons-018825 192.168.39.0/24 created
	I0719 14:22:02.718534   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:02.718467   12191 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:22:02.718553   12169 main.go:141] libmachine: (addons-018825) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825 ...
	I0719 14:22:02.718567   12169 main.go:141] libmachine: (addons-018825) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:22:02.718694   12169 main.go:141] libmachine: (addons-018825) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:22:02.973162   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:02.973048   12191 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa...
	I0719 14:22:03.039480   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:03.039347   12191 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/addons-018825.rawdisk...
	I0719 14:22:03.039517   12169 main.go:141] libmachine: (addons-018825) DBG | Writing magic tar header
	I0719 14:22:03.039596   12169 main.go:141] libmachine: (addons-018825) DBG | Writing SSH key tar header
	I0719 14:22:03.039642   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:03.039507   12191 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825 ...
	I0719 14:22:03.039674   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825 (perms=drwx------)
	I0719 14:22:03.039693   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825
	I0719 14:22:03.039704   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:22:03.039711   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:22:03.039718   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:22:03.039728   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:22:03.039743   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:22:03.039761   12169 main.go:141] libmachine: (addons-018825) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:22:03.039776   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:22:03.039784   12169 main.go:141] libmachine: (addons-018825) Creating domain...
	I0719 14:22:03.039805   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:22:03.039837   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:22:03.039850   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:22:03.039859   12169 main.go:141] libmachine: (addons-018825) DBG | Checking permissions on dir: /home
	I0719 14:22:03.039872   12169 main.go:141] libmachine: (addons-018825) DBG | Skipping /home - not owner
	I0719 14:22:03.041003   12169 main.go:141] libmachine: (addons-018825) define libvirt domain using xml: 
	I0719 14:22:03.041023   12169 main.go:141] libmachine: (addons-018825) <domain type='kvm'>
	I0719 14:22:03.041033   12169 main.go:141] libmachine: (addons-018825)   <name>addons-018825</name>
	I0719 14:22:03.041040   12169 main.go:141] libmachine: (addons-018825)   <memory unit='MiB'>4000</memory>
	I0719 14:22:03.041060   12169 main.go:141] libmachine: (addons-018825)   <vcpu>2</vcpu>
	I0719 14:22:03.041074   12169 main.go:141] libmachine: (addons-018825)   <features>
	I0719 14:22:03.041098   12169 main.go:141] libmachine: (addons-018825)     <acpi/>
	I0719 14:22:03.041115   12169 main.go:141] libmachine: (addons-018825)     <apic/>
	I0719 14:22:03.041121   12169 main.go:141] libmachine: (addons-018825)     <pae/>
	I0719 14:22:03.041127   12169 main.go:141] libmachine: (addons-018825)     
	I0719 14:22:03.041132   12169 main.go:141] libmachine: (addons-018825)   </features>
	I0719 14:22:03.041140   12169 main.go:141] libmachine: (addons-018825)   <cpu mode='host-passthrough'>
	I0719 14:22:03.041145   12169 main.go:141] libmachine: (addons-018825)   
	I0719 14:22:03.041152   12169 main.go:141] libmachine: (addons-018825)   </cpu>
	I0719 14:22:03.041164   12169 main.go:141] libmachine: (addons-018825)   <os>
	I0719 14:22:03.041174   12169 main.go:141] libmachine: (addons-018825)     <type>hvm</type>
	I0719 14:22:03.041182   12169 main.go:141] libmachine: (addons-018825)     <boot dev='cdrom'/>
	I0719 14:22:03.041197   12169 main.go:141] libmachine: (addons-018825)     <boot dev='hd'/>
	I0719 14:22:03.041207   12169 main.go:141] libmachine: (addons-018825)     <bootmenu enable='no'/>
	I0719 14:22:03.041211   12169 main.go:141] libmachine: (addons-018825)   </os>
	I0719 14:22:03.041217   12169 main.go:141] libmachine: (addons-018825)   <devices>
	I0719 14:22:03.041224   12169 main.go:141] libmachine: (addons-018825)     <disk type='file' device='cdrom'>
	I0719 14:22:03.041232   12169 main.go:141] libmachine: (addons-018825)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/boot2docker.iso'/>
	I0719 14:22:03.041239   12169 main.go:141] libmachine: (addons-018825)       <target dev='hdc' bus='scsi'/>
	I0719 14:22:03.041263   12169 main.go:141] libmachine: (addons-018825)       <readonly/>
	I0719 14:22:03.041282   12169 main.go:141] libmachine: (addons-018825)     </disk>
	I0719 14:22:03.041293   12169 main.go:141] libmachine: (addons-018825)     <disk type='file' device='disk'>
	I0719 14:22:03.041306   12169 main.go:141] libmachine: (addons-018825)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:22:03.041322   12169 main.go:141] libmachine: (addons-018825)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/addons-018825.rawdisk'/>
	I0719 14:22:03.041331   12169 main.go:141] libmachine: (addons-018825)       <target dev='hda' bus='virtio'/>
	I0719 14:22:03.041337   12169 main.go:141] libmachine: (addons-018825)     </disk>
	I0719 14:22:03.041342   12169 main.go:141] libmachine: (addons-018825)     <interface type='network'>
	I0719 14:22:03.041349   12169 main.go:141] libmachine: (addons-018825)       <source network='mk-addons-018825'/>
	I0719 14:22:03.041355   12169 main.go:141] libmachine: (addons-018825)       <model type='virtio'/>
	I0719 14:22:03.041366   12169 main.go:141] libmachine: (addons-018825)     </interface>
	I0719 14:22:03.041385   12169 main.go:141] libmachine: (addons-018825)     <interface type='network'>
	I0719 14:22:03.041401   12169 main.go:141] libmachine: (addons-018825)       <source network='default'/>
	I0719 14:22:03.041412   12169 main.go:141] libmachine: (addons-018825)       <model type='virtio'/>
	I0719 14:22:03.041419   12169 main.go:141] libmachine: (addons-018825)     </interface>
	I0719 14:22:03.041428   12169 main.go:141] libmachine: (addons-018825)     <serial type='pty'>
	I0719 14:22:03.041436   12169 main.go:141] libmachine: (addons-018825)       <target port='0'/>
	I0719 14:22:03.041441   12169 main.go:141] libmachine: (addons-018825)     </serial>
	I0719 14:22:03.041448   12169 main.go:141] libmachine: (addons-018825)     <console type='pty'>
	I0719 14:22:03.041456   12169 main.go:141] libmachine: (addons-018825)       <target type='serial' port='0'/>
	I0719 14:22:03.041470   12169 main.go:141] libmachine: (addons-018825)     </console>
	I0719 14:22:03.041488   12169 main.go:141] libmachine: (addons-018825)     <rng model='virtio'>
	I0719 14:22:03.041499   12169 main.go:141] libmachine: (addons-018825)       <backend model='random'>/dev/random</backend>
	I0719 14:22:03.041510   12169 main.go:141] libmachine: (addons-018825)     </rng>
	I0719 14:22:03.041517   12169 main.go:141] libmachine: (addons-018825)     
	I0719 14:22:03.041525   12169 main.go:141] libmachine: (addons-018825)     
	I0719 14:22:03.041537   12169 main.go:141] libmachine: (addons-018825)   </devices>
	I0719 14:22:03.041543   12169 main.go:141] libmachine: (addons-018825) </domain>
	I0719 14:22:03.041550   12169 main.go:141] libmachine: (addons-018825) 
	I0719 14:22:03.048136   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:ec:9c:95 in network default
	I0719 14:22:03.048626   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:03.048640   12169 main.go:141] libmachine: (addons-018825) Ensuring networks are active...
	I0719 14:22:03.049275   12169 main.go:141] libmachine: (addons-018825) Ensuring network default is active
	I0719 14:22:03.049580   12169 main.go:141] libmachine: (addons-018825) Ensuring network mk-addons-018825 is active
	I0719 14:22:03.050147   12169 main.go:141] libmachine: (addons-018825) Getting domain xml...
	I0719 14:22:03.050961   12169 main.go:141] libmachine: (addons-018825) Creating domain...
	I0719 14:22:04.436146   12169 main.go:141] libmachine: (addons-018825) Waiting to get IP...
	I0719 14:22:04.436961   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:04.437516   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:04.437544   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:04.437467   12191 retry.go:31] will retry after 304.107643ms: waiting for machine to come up
	I0719 14:22:04.743020   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:04.743451   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:04.743479   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:04.743428   12191 retry.go:31] will retry after 286.459263ms: waiting for machine to come up
	I0719 14:22:05.032070   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:05.032577   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:05.032604   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:05.032534   12191 retry.go:31] will retry after 373.323599ms: waiting for machine to come up
	I0719 14:22:05.407334   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:05.407834   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:05.407871   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:05.407780   12191 retry.go:31] will retry after 392.760765ms: waiting for machine to come up
	I0719 14:22:05.802339   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:05.802879   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:05.802907   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:05.802833   12191 retry.go:31] will retry after 514.7879ms: waiting for machine to come up
	I0719 14:22:06.319598   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:06.320043   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:06.320074   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:06.319994   12191 retry.go:31] will retry after 719.918001ms: waiting for machine to come up
	I0719 14:22:07.041925   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:07.042283   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:07.042305   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:07.042222   12191 retry.go:31] will retry after 1.100071039s: waiting for machine to come up
	I0719 14:22:08.144199   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:08.144748   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:08.144777   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:08.144697   12191 retry.go:31] will retry after 914.322914ms: waiting for machine to come up
	I0719 14:22:09.060314   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:09.060804   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:09.060834   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:09.060751   12191 retry.go:31] will retry after 1.190064357s: waiting for machine to come up
	I0719 14:22:10.253077   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:10.253473   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:10.253503   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:10.253435   12191 retry.go:31] will retry after 1.875735266s: waiting for machine to come up
	I0719 14:22:12.131268   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:12.131674   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:12.131703   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:12.131642   12191 retry.go:31] will retry after 2.089554021s: waiting for machine to come up
	I0719 14:22:14.223487   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:14.223948   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:14.223975   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:14.223902   12191 retry.go:31] will retry after 3.555218909s: waiting for machine to come up
	I0719 14:22:17.780236   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:17.780590   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:17.780633   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:17.780578   12191 retry.go:31] will retry after 3.539642936s: waiting for machine to come up
	I0719 14:22:21.324156   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:21.324601   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find current IP address of domain addons-018825 in network mk-addons-018825
	I0719 14:22:21.324629   12169 main.go:141] libmachine: (addons-018825) DBG | I0719 14:22:21.324503   12191 retry.go:31] will retry after 4.417103586s: waiting for machine to come up
	I0719 14:22:25.745978   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.746500   12169 main.go:141] libmachine: (addons-018825) Found IP for machine: 192.168.39.100
	I0719 14:22:25.746518   12169 main.go:141] libmachine: (addons-018825) Reserving static IP address...
	I0719 14:22:25.746538   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has current primary IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.746844   12169 main.go:141] libmachine: (addons-018825) DBG | unable to find host DHCP lease matching {name: "addons-018825", mac: "52:54:00:7c:72:1e", ip: "192.168.39.100"} in network mk-addons-018825
	I0719 14:22:25.816418   12169 main.go:141] libmachine: (addons-018825) DBG | Getting to WaitForSSH function...
	I0719 14:22:25.816445   12169 main.go:141] libmachine: (addons-018825) Reserved static IP address: 192.168.39.100
	I0719 14:22:25.816457   12169 main.go:141] libmachine: (addons-018825) Waiting for SSH to be available...
	I0719 14:22:25.819369   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.819751   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:25.819783   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.819913   12169 main.go:141] libmachine: (addons-018825) DBG | Using SSH client type: external
	I0719 14:22:25.819945   12169 main.go:141] libmachine: (addons-018825) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa (-rw-------)
	I0719 14:22:25.819976   12169 main.go:141] libmachine: (addons-018825) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:22:25.819988   12169 main.go:141] libmachine: (addons-018825) DBG | About to run SSH command:
	I0719 14:22:25.820034   12169 main.go:141] libmachine: (addons-018825) DBG | exit 0
	I0719 14:22:25.954287   12169 main.go:141] libmachine: (addons-018825) DBG | SSH cmd err, output: <nil>: 
	I0719 14:22:25.954549   12169 main.go:141] libmachine: (addons-018825) KVM machine creation complete!
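The block above shows the kvm2 driver polling libvirt for the new domain's DHCP lease, backing off between attempts (roughly 300 ms growing to a few seconds) until the IP appears. A minimal sketch of that wait-and-retry pattern, assuming a hypothetical lookupLeaseIP helper in place of the driver's real libvirt query:

    // retry_ip.go - illustrative only; lookupLeaseIP stands in for the kvm2
    // driver's actual DHCP-lease lookup.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupLeaseIP is a hypothetical stand-in for querying the libvirt
    // network for the domain's current lease; it always fails here so the
    // retry path is exercised.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errNoLease
    }

    // waitForIP keeps polling with a growing, jittered delay, mirroring the
    // "will retry after ..." lines above, until the lease appears or the
    // deadline passes.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
        delay := 300 * time.Millisecond
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("no IP for %s within %v", mac, deadline)
    }

    func main() {
        if _, err := waitForIP("52:54:00:7c:72:1e", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }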
	I0719 14:22:25.954850   12169 main.go:141] libmachine: (addons-018825) Calling .GetConfigRaw
	I0719 14:22:25.955507   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:25.955756   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:25.955938   12169 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:22:25.955957   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:25.957182   12169 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:22:25.957197   12169 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:22:25.957205   12169 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:22:25.957215   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:25.959386   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.959683   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:25.959719   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:25.959861   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:25.960013   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:25.960130   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:25.960244   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:25.960407   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:25.960600   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:25.960612   12169 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:22:26.065254   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
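WaitForSSH above is simply "run `exit 0` over SSH and treat a clean exit as ready". A rough equivalent using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, while the retry cadence is an assumption of this sketch:

    // sshwait.go - probe SSH readiness by running `exit 0` until it succeeds.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func sshReady(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0") // a zero exit status means the guest is reachable
    }

    func main() {
        for i := 0; i < 30; i++ {
            if err := sshReady("192.168.39.100:22", "docker", "/path/to/id_rsa"); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }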
	I0719 14:22:26.065278   12169 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:22:26.065284   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.067771   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.068055   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.068082   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.068225   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.068384   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.068502   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.068617   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.068760   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.068960   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.068971   12169 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:22:26.174963   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:22:26.175019   12169 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:22:26.175027   12169 main.go:141] libmachine: Provisioning with buildroot...
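"Detecting the provisioner" boils down to reading /etc/os-release over SSH and matching the ID/NAME fields, which is how Buildroot is recognized above. A small sketch of that parse, assuming the file contents have already been fetched:

    // osrelease.go - parse /etc/os-release style key=value output into a map;
    // the sample input is the Buildroot block from the log above.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func parseOSRelease(contents string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(sample)
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host:", info["ID"]) // provision with the buildroot provisioner
        }
    }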
	I0719 14:22:26.175039   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:26.175263   12169 buildroot.go:166] provisioning hostname "addons-018825"
	I0719 14:22:26.175284   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:26.175460   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.177906   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.178251   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.178278   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.178434   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.178602   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.178737   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.178878   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.179127   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.179284   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.179296   12169 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-018825 && echo "addons-018825" | sudo tee /etc/hostname
	I0719 14:22:26.300586   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-018825
	
	I0719 14:22:26.300615   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.303272   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.303604   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.303624   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.303808   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.303991   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.304154   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.304286   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.304425   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.304609   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.304627   12169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-018825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-018825/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-018825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:22:26.418099   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:22:26.418128   12169 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:22:26.418145   12169 buildroot.go:174] setting up certificates
	I0719 14:22:26.418154   12169 provision.go:84] configureAuth start
	I0719 14:22:26.418161   12169 main.go:141] libmachine: (addons-018825) Calling .GetMachineName
	I0719 14:22:26.418397   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:26.420892   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.421219   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.421248   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.421349   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.424153   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.424424   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.424442   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.424578   12169 provision.go:143] copyHostCerts
	I0719 14:22:26.424644   12169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:22:26.424775   12169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:22:26.424838   12169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:22:26.424887   12169 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.addons-018825 san=[127.0.0.1 192.168.39.100 addons-018825 localhost minikube]
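The server certificate generated here carries SANs for 127.0.0.1, the node IP, the hostname, localhost, and minikube, signed by the machine CA. A stripped-down version of that step with crypto/x509; to stay self-contained it creates a throwaway CA in process instead of loading ca.pem/ca-key.pem, and it skips error handling:

    // servercert.go - generate a CA and a server cert whose SANs mirror the
    // ones in the log (127.0.0.1 192.168.39.100 addons-018825 localhost minikube).
    // Illustrative only; the real flow loads an existing CA from disk.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-018825"}},
            DNSNames:     []string{"addons-018825", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }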
	I0719 14:22:26.518450   12169 provision.go:177] copyRemoteCerts
	I0719 14:22:26.518501   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:22:26.518522   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.521034   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.521374   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.521403   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.521571   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.521823   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.521966   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.522109   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:26.604186   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:22:26.627895   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 14:22:26.650869   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 14:22:26.672910   12169 provision.go:87] duration metric: took 254.74155ms to configureAuth
	I0719 14:22:26.672933   12169 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:22:26.673100   12169 config.go:182] Loaded profile config "addons-018825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:22:26.673166   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.675876   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.676214   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.676241   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.676397   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.676579   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.676742   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.676881   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.677028   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:26.677173   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:26.677186   12169 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:22:26.950068   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:22:26.950097   12169 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:22:26.950108   12169 main.go:141] libmachine: (addons-018825) Calling .GetURL
	I0719 14:22:26.951314   12169 main.go:141] libmachine: (addons-018825) DBG | Using libvirt version 6000000
	I0719 14:22:26.953393   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.953752   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.953778   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.953932   12169 main.go:141] libmachine: Docker is up and running!
	I0719 14:22:26.953949   12169 main.go:141] libmachine: Reticulating splines...
	I0719 14:22:26.953958   12169 client.go:171] duration metric: took 24.59136072s to LocalClient.Create
	I0719 14:22:26.953987   12169 start.go:167] duration metric: took 24.591425255s to libmachine.API.Create "addons-018825"
	I0719 14:22:26.954000   12169 start.go:293] postStartSetup for "addons-018825" (driver="kvm2")
	I0719 14:22:26.954016   12169 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:22:26.954037   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:26.954279   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:22:26.954302   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:26.956188   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.956453   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:26.956478   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:26.956600   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:26.956760   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:26.956908   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:26.957028   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:27.040706   12169 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:22:27.044709   12169 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:22:27.044729   12169 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:22:27.044808   12169 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:22:27.044832   12169 start.go:296] duration metric: took 90.824275ms for postStartSetup
	I0719 14:22:27.044872   12169 main.go:141] libmachine: (addons-018825) Calling .GetConfigRaw
	I0719 14:22:27.045393   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:27.047621   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.048112   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.048138   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.048376   12169 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/config.json ...
	I0719 14:22:27.048538   12169 start.go:128] duration metric: took 24.703554859s to createHost
	I0719 14:22:27.048558   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:27.050721   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.051147   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.051167   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.051300   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:27.051459   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.051690   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.051800   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:27.051927   12169 main.go:141] libmachine: Using SSH client type: native
	I0719 14:22:27.052075   12169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0719 14:22:27.052084   12169 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:22:27.158568   12169 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721398947.135991007
	
	I0719 14:22:27.158585   12169 fix.go:216] guest clock: 1721398947.135991007
	I0719 14:22:27.158596   12169 fix.go:229] Guest: 2024-07-19 14:22:27.135991007 +0000 UTC Remote: 2024-07-19 14:22:27.048547952 +0000 UTC m=+24.805298864 (delta=87.443055ms)
	I0719 14:22:27.158631   12169 fix.go:200] guest clock delta is within tolerance: 87.443055ms
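The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it against the host's view of the time before continuing. A sketch of that comparison; the 2-second tolerance is an assumption, only the delta arithmetic and the sample timestamps are taken from the log:

    // clockdelta.go - compare a guest timestamp of the form <sec>.<nsec>
    // (output of `date +%s.%N`) against a local reference time.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func guestClockDelta(out string, local time.Time) (time.Duration, error) {
        sec, nsec, ok := strings.Cut(strings.TrimSpace(out), ".")
        if !ok {
            return 0, fmt.Errorf("unexpected date output %q", out)
        }
        s, err := strconv.ParseInt(sec, 10, 64)
        if err != nil {
            return 0, err
        }
        n, err := strconv.ParseInt(nsec, 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(s, n).Sub(local), nil
    }

    func main() {
        // Values from the log: guest 1721398947.135991007, host 1721398947.048547952.
        delta, err := guestClockDelta("1721398947.135991007", time.Unix(1721398947, 48547952))
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed threshold for this sketch
        if delta < -tolerance || delta > tolerance {
            fmt.Printf("guest clock delta %v outside tolerance, would resync\n", delta)
            return
        }
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints 87.443055ms
    }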
	I0719 14:22:27.158636   12169 start.go:83] releasing machines lock for "addons-018825", held for 24.813714364s
	I0719 14:22:27.158657   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.158888   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:27.161163   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.161493   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.161519   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.161612   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.162042   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.162184   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:27.162265   12169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:22:27.162317   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:27.162424   12169 ssh_runner.go:195] Run: cat /version.json
	I0719 14:22:27.162445   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:27.164786   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165105   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.165129   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165148   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165251   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:27.165424   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.165544   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:27.165571   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:27.165589   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:27.165669   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:27.165746   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:27.165892   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:27.166028   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:27.166188   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:27.242824   12169 ssh_runner.go:195] Run: systemctl --version
	I0719 14:22:27.268669   12169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:22:27.426870   12169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:22:27.432789   12169 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:22:27.432871   12169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:22:27.448727   12169 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:22:27.448752   12169 start.go:495] detecting cgroup driver to use...
	I0719 14:22:27.448820   12169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:22:27.466498   12169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:22:27.480727   12169 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:22:27.480795   12169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:22:27.493929   12169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:22:27.507495   12169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:22:27.631040   12169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:22:27.793562   12169 docker.go:233] disabling docker service ...
	I0719 14:22:27.793617   12169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:22:27.807466   12169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:22:27.820058   12169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:22:27.943877   12169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:22:28.056561   12169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:22:28.071372   12169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:22:28.088856   12169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:22:28.088909   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.098419   12169 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:22:28.098462   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.108134   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.117555   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.127149   12169 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:22:28.136926   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.146333   12169 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.162397   12169 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:22:28.172300   12169 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:22:28.181209   12169 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:22:28.181256   12169 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:22:28.193769   12169 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:22:28.202445   12169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:22:28.313937   12169 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:22:28.445058   12169 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:22:28.445151   12169 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:22:28.449616   12169 start.go:563] Will wait 60s for crictl version
	I0719 14:22:28.449681   12169 ssh_runner.go:195] Run: which crictl
	I0719 14:22:28.453266   12169 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:22:28.494903   12169 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:22:28.495018   12169 ssh_runner.go:195] Run: crio --version
	I0719 14:22:28.522659   12169 ssh_runner.go:195] Run: crio --version
	I0719 14:22:28.555215   12169 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:22:28.556409   12169 main.go:141] libmachine: (addons-018825) Calling .GetIP
	I0719 14:22:28.559152   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:28.559506   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:28.559531   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:28.559704   12169 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:22:28.563876   12169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
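Here, and again later for control-plane.minikube.internal, the same shell trick keeps /etc/hosts consistent: filter out any existing entry for the name, append a fresh "IP<TAB>name" line into a temp file, and copy it over /etc/hosts. A Go equivalent of that rewrite, assuming direct access to a local file rather than going through ssh_runner:

    // hostsentry.go - rewrite an /etc/hosts style file so that exactly one
    // "IP<TAB>name" entry exists for the given name; mirrors the bash
    // one-liner in the log, but the local path and rename are assumptions.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+name) {
                continue // drop empty split remnants and any stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // the log uses `sudo cp` from /tmp instead of a rename
    }

    func main() {
        if err := setHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }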
	I0719 14:22:28.576553   12169 kubeadm.go:883] updating cluster {Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 14:22:28.576646   12169 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:22:28.576697   12169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:22:28.608090   12169 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 14:22:28.608153   12169 ssh_runner.go:195] Run: which lz4
	I0719 14:22:28.612183   12169 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 14:22:28.616199   12169 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 14:22:28.616225   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 14:22:29.921589   12169 crio.go:462] duration metric: took 1.309435123s to copy over tarball
	I0719 14:22:29.921652   12169 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 14:22:32.195571   12169 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273893497s)
	I0719 14:22:32.195599   12169 crio.go:469] duration metric: took 2.273983793s to extract the tarball
	I0719 14:22:32.195607   12169 ssh_runner.go:146] rm: /preloaded.tar.lz4
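The preload step copies a ~406 MB lz4-compressed image tarball into the guest and unpacks it with `tar --xattrs -I lz4 -C /var`, so the cri-o image store is populated without pulling anything. Purely for illustration, a Go reader over such an archive using archive/tar plus github.com/pierrec/lz4/v4 (an assumed external dependency; the real flow shells out to tar as shown above):

    // preload_list.go - list the entries of an lz4-compressed tarball such as
    // preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4.
    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "os"

        "github.com/pierrec/lz4/v4"
    )

    func main() {
        f, err := os.Open("preloaded.tar.lz4")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        tr := tar.NewReader(lz4.NewReader(f))
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Printf("%s\t%d bytes\n", hdr.Name, hdr.Size)
        }
    }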
	I0719 14:22:32.239714   12169 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:22:32.281809   12169 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:22:32.281836   12169 cache_images.go:84] Images are preloaded, skipping loading
	I0719 14:22:32.281846   12169 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.3 crio true true} ...
	I0719 14:22:32.281983   12169 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-018825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:22:32.282081   12169 ssh_runner.go:195] Run: crio config
	I0719 14:22:32.330345   12169 cni.go:84] Creating CNI manager for ""
	I0719 14:22:32.330366   12169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:22:32.330374   12169 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 14:22:32.330395   12169 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-018825 NodeName:addons-018825 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 14:22:32.330525   12169 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-018825"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 14:22:32.330578   12169 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:22:32.340981   12169 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 14:22:32.341054   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 14:22:32.350979   12169 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 14:22:32.367492   12169 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:22:32.383302   12169 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 14:22:32.398795   12169 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0719 14:22:32.402669   12169 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:22:32.414420   12169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:22:32.530591   12169 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:22:32.546502   12169 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825 for IP: 192.168.39.100
	I0719 14:22:32.546530   12169 certs.go:194] generating shared ca certs ...
	I0719 14:22:32.546549   12169 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.546711   12169 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:22:32.662183   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt ...
	I0719 14:22:32.662214   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt: {Name:mk653b526ac38e1c5aaf4a69315f128eb630d254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.662416   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key ...
	I0719 14:22:32.662432   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key: {Name:mkfbbf0641db43c54a468a53e399a0eeead570f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.662515   12169 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:22:32.776929   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt ...
	I0719 14:22:32.776958   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt: {Name:mke9f53eb45f4a92a42e018c67b56e0843ac5842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.777118   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key ...
	I0719 14:22:32.777129   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key: {Name:mk435a7f64e6da5753d93a1289177a6967581df2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.777191   12169 certs.go:256] generating profile certs ...
	I0719 14:22:32.777240   12169 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.key
	I0719 14:22:32.777252   12169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt with IP's: []
	I0719 14:22:32.948611   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt ...
	I0719 14:22:32.948640   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: {Name:mk7b86b310f3139a7b89f9bc57d7c3ff3235d404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.948799   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.key ...
	I0719 14:22:32.948809   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.key: {Name:mk707486ecbbd323681cae7b1b167fb9317eaad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:32.948879   12169 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6
	I0719 14:22:32.948897   12169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100]
	I0719 14:22:33.095901   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6 ...
	I0719 14:22:33.095935   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6: {Name:mk2b86f4561e2ea5008488d825bc65cd1db25651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.096100   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6 ...
	I0719 14:22:33.096112   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6: {Name:mk4ffd02ebd6cf9d73ea940f7afe827800275b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.096174   12169 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt.0d1b89f6 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt
	I0719 14:22:33.096240   12169 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key.0d1b89f6 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key
	I0719 14:22:33.096283   12169 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key
	I0719 14:22:33.096299   12169 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt with IP's: []
	I0719 14:22:33.144582   12169 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt ...
	I0719 14:22:33.144609   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt: {Name:mk72c8b68e207a2c3fed34285c51d2c5714b3abd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.144755   12169 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key ...
	I0719 14:22:33.144764   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key: {Name:mkb139ba4f197ec9147cde88399a4eead3eb1739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:33.144914   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:22:33.144944   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:22:33.144967   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:22:33.144989   12169 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:22:33.145492   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:22:33.170011   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:22:33.193786   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:22:33.217180   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:22:33.243689   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 14:22:33.269669   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 14:22:33.294872   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:22:33.319727   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 14:22:33.342025   12169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:22:33.364374   12169 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 14:22:33.380540   12169 ssh_runner.go:195] Run: openssl version
	I0719 14:22:33.386677   12169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:22:33.396863   12169 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:22:33.401218   12169 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:22:33.401263   12169 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:22:33.407260   12169 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:22:33.418412   12169 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:22:33.422328   12169 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:22:33.422383   12169 kubeadm.go:392] StartCluster: {Name:addons-018825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-018825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:22:33.422445   12169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 14:22:33.422493   12169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 14:22:33.458732   12169 cri.go:89] found id: ""
	I0719 14:22:33.458803   12169 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 14:22:33.469094   12169 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 14:22:33.479138   12169 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 14:22:33.488811   12169 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 14:22:33.488827   12169 kubeadm.go:157] found existing configuration files:
	
	I0719 14:22:33.488868   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 14:22:33.498216   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 14:22:33.498267   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 14:22:33.507521   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 14:22:33.517258   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 14:22:33.517311   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 14:22:33.527026   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 14:22:33.537692   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 14:22:33.537752   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 14:22:33.549407   12169 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 14:22:33.560268   12169 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 14:22:33.560320   12169 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 14:22:33.571360   12169 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 14:22:33.759969   12169 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 14:22:43.594441   12169 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 14:22:43.594535   12169 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 14:22:43.594648   12169 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 14:22:43.594780   12169 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 14:22:43.594902   12169 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 14:22:43.594993   12169 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 14:22:43.596715   12169 out.go:204]   - Generating certificates and keys ...
	I0719 14:22:43.596836   12169 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 14:22:43.596920   12169 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 14:22:43.597021   12169 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 14:22:43.597081   12169 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 14:22:43.597152   12169 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 14:22:43.597204   12169 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 14:22:43.597250   12169 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 14:22:43.597349   12169 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-018825 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0719 14:22:43.597397   12169 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 14:22:43.597521   12169 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-018825 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0719 14:22:43.597619   12169 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 14:22:43.597714   12169 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 14:22:43.597777   12169 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 14:22:43.597887   12169 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 14:22:43.597956   12169 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 14:22:43.598032   12169 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 14:22:43.598115   12169 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 14:22:43.598211   12169 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 14:22:43.598288   12169 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 14:22:43.598372   12169 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 14:22:43.598461   12169 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 14:22:43.599694   12169 out.go:204]   - Booting up control plane ...
	I0719 14:22:43.599790   12169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 14:22:43.599910   12169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 14:22:43.599996   12169 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 14:22:43.600103   12169 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 14:22:43.600173   12169 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 14:22:43.600206   12169 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 14:22:43.600343   12169 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 14:22:43.600445   12169 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 14:22:43.600500   12169 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.204727ms
	I0719 14:22:43.600561   12169 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 14:22:43.600621   12169 kubeadm.go:310] [api-check] The API server is healthy after 5.00122684s
	I0719 14:22:43.600713   12169 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 14:22:43.600819   12169 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 14:22:43.600872   12169 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 14:22:43.601052   12169 kubeadm.go:310] [mark-control-plane] Marking the node addons-018825 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 14:22:43.601103   12169 kubeadm.go:310] [bootstrap-token] Using token: eraloe.nxrwbbfvsota337c
	I0719 14:22:43.602405   12169 out.go:204]   - Configuring RBAC rules ...
	I0719 14:22:43.602489   12169 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 14:22:43.602566   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 14:22:43.602721   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 14:22:43.602890   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 14:22:43.602992   12169 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 14:22:43.603064   12169 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 14:22:43.603168   12169 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 14:22:43.603215   12169 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 14:22:43.603262   12169 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 14:22:43.603268   12169 kubeadm.go:310] 
	I0719 14:22:43.603331   12169 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 14:22:43.603341   12169 kubeadm.go:310] 
	I0719 14:22:43.603434   12169 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 14:22:43.603444   12169 kubeadm.go:310] 
	I0719 14:22:43.603483   12169 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 14:22:43.603533   12169 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 14:22:43.603606   12169 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 14:22:43.603615   12169 kubeadm.go:310] 
	I0719 14:22:43.603675   12169 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 14:22:43.603682   12169 kubeadm.go:310] 
	I0719 14:22:43.603738   12169 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 14:22:43.603747   12169 kubeadm.go:310] 
	I0719 14:22:43.603811   12169 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 14:22:43.603899   12169 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 14:22:43.603978   12169 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 14:22:43.603985   12169 kubeadm.go:310] 
	I0719 14:22:43.604087   12169 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 14:22:43.604196   12169 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 14:22:43.604214   12169 kubeadm.go:310] 
	I0719 14:22:43.604343   12169 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eraloe.nxrwbbfvsota337c \
	I0719 14:22:43.604482   12169 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 14:22:43.604511   12169 kubeadm.go:310] 	--control-plane 
	I0719 14:22:43.604517   12169 kubeadm.go:310] 
	I0719 14:22:43.604600   12169 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 14:22:43.604606   12169 kubeadm.go:310] 
	I0719 14:22:43.604672   12169 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eraloe.nxrwbbfvsota337c \
	I0719 14:22:43.604767   12169 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 14:22:43.604779   12169 cni.go:84] Creating CNI manager for ""
	I0719 14:22:43.604788   12169 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:22:43.606319   12169 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 14:22:43.607351   12169 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 14:22:43.618848   12169 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 14:22:43.637485   12169 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 14:22:43.637587   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:43.637592   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-018825 minikube.k8s.io/updated_at=2024_07_19T14_22_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=addons-018825 minikube.k8s.io/primary=true
	I0719 14:22:43.666008   12169 ops.go:34] apiserver oom_adj: -16
	I0719 14:22:43.755568   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:44.256041   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:44.756047   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:45.255900   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:45.756223   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:46.255574   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:46.756020   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:47.256563   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:47.755712   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:48.256408   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:48.755625   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:49.256184   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:49.755626   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:50.255898   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:50.756456   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:51.256129   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:51.756230   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:52.255832   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:52.755705   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:53.256233   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:53.755564   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:54.256459   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:54.755707   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:55.255816   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:55.756453   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:56.256337   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:56.755764   12169 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:22:56.861922   12169 kubeadm.go:1113] duration metric: took 13.224402674s to wait for elevateKubeSystemPrivileges
	I0719 14:22:56.861980   12169 kubeadm.go:394] duration metric: took 23.439599918s to StartCluster
	I0719 14:22:56.862008   12169 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:56.862149   12169 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:22:56.862640   12169 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:22:56.862879   12169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 14:22:56.862891   12169 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 14:22:56.862975   12169 addons.go:69] Setting yakd=true in profile "addons-018825"
	I0719 14:22:56.862869   12169 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:22:56.863049   12169 addons.go:69] Setting registry=true in profile "addons-018825"
	I0719 14:22:56.863168   12169 addons.go:234] Setting addon registry=true in "addons-018825"
	I0719 14:22:56.863206   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863051   12169 addons.go:69] Setting ingress-dns=true in profile "addons-018825"
	I0719 14:22:56.863299   12169 addons.go:234] Setting addon ingress-dns=true in "addons-018825"
	I0719 14:22:56.863353   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863019   12169 addons.go:69] Setting inspektor-gadget=true in profile "addons-018825"
	I0719 14:22:56.863494   12169 addons.go:234] Setting addon inspektor-gadget=true in "addons-018825"
	I0719 14:22:56.863536   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863030   12169 addons.go:69] Setting metrics-server=true in profile "addons-018825"
	I0719 14:22:56.863595   12169 addons.go:234] Setting addon metrics-server=true in "addons-018825"
	I0719 14:22:56.863640   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863031   12169 addons.go:69] Setting helm-tiller=true in profile "addons-018825"
	I0719 14:22:56.863681   12169 addons.go:234] Setting addon helm-tiller=true in "addons-018825"
	I0719 14:22:56.863701   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.863720   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863041   12169 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-018825"
	I0719 14:22:56.863752   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.863775   12169 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-018825"
	I0719 14:22:56.863805   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.863042   12169 addons.go:69] Setting ingress=true in profile "addons-018825"
	I0719 14:22:56.863849   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.863865   12169 addons.go:234] Setting addon ingress=true in "addons-018825"
	I0719 14:22:56.863897   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.863929   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.863979   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.863052   12169 addons.go:69] Setting gcp-auth=true in profile "addons-018825"
	I0719 14:22:56.864074   12169 mustload.go:65] Loading cluster: addons-018825
	I0719 14:22:56.863062   12169 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-018825"
	I0719 14:22:56.864092   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864106   12169 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-018825"
	I0719 14:22:56.863064   12169 addons.go:69] Setting default-storageclass=true in profile "addons-018825"
	I0719 14:22:56.864126   12169 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-018825"
	I0719 14:22:56.863072   12169 addons.go:69] Setting volcano=true in profile "addons-018825"
	I0719 14:22:56.864133   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864155   12169 addons.go:234] Setting addon volcano=true in "addons-018825"
	I0719 14:22:56.863077   12169 addons.go:69] Setting volumesnapshots=true in profile "addons-018825"
	I0719 14:22:56.864174   12169 addons.go:234] Setting addon volumesnapshots=true in "addons-018825"
	I0719 14:22:56.863078   12169 addons.go:69] Setting cloud-spanner=true in profile "addons-018825"
	I0719 14:22:56.864195   12169 addons.go:234] Setting addon cloud-spanner=true in "addons-018825"
	I0719 14:22:56.863021   12169 addons.go:234] Setting addon yakd=true in "addons-018825"
	I0719 14:22:56.863075   12169 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-018825"
	I0719 14:22:56.864239   12169 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-018825"
	I0719 14:22:56.863092   12169 config.go:182] Loaded profile config "addons-018825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:22:56.863089   12169 addons.go:69] Setting storage-provisioner=true in profile "addons-018825"
	I0719 14:22:56.864265   12169 addons.go:234] Setting addon storage-provisioner=true in "addons-018825"
	I0719 14:22:56.864277   12169 config.go:182] Loaded profile config "addons-018825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:22:56.864517   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864587   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864596   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864606   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864625   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864690   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864846   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864874   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864872   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.864903   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.864519   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864982   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.864692   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865065   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865319   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865343   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865353   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.865410   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.865524   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865600   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865661   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865685   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.865562   12169 out.go:177] * Verifying Kubernetes components...
	I0719 14:22:56.865757   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.865959   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.866318   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.867361   12169 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:22:56.884345   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34869
	I0719 14:22:56.884879   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.885266   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0719 14:22:56.885327   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0719 14:22:56.885551   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.885589   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.885861   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.886405   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.886425   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.886494   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.886849   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.886909   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.887286   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.887420   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.887434   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.887500   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.887524   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.887885   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.888365   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.888390   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.888615   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41637
	I0719 14:22:56.888986   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.889459   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.889476   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.889911   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.890459   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.890487   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.895482   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0719 14:22:56.895499   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0719 14:22:56.898759   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.898802   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.898846   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.899457   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.899486   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.899890   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.900624   12169 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-018825"
	I0719 14:22:56.900674   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.900637   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.900747   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.901023   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.901072   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.901822   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.901860   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.912801   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.913113   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.913146   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.913490   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.913514   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.913897   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.914073   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.916225   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.916617   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.916660   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.935749   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0719 14:22:56.938230   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0719 14:22:56.938372   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0719 14:22:56.938449   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34913
	I0719 14:22:56.939040   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.939248   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.939351   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.939621   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.939634   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.939754   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.939767   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.939872   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.939880   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.939934   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.940308   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.940446   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.940457   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.940511   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.940947   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.940975   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.941174   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.941235   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0719 14:22:56.941381   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.941404   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.941573   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.942269   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.942451   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.942682   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.942705   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.943036   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.943286   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.943482   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.943966   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.944731   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.944802   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0719 14:22:56.945291   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.945893   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.946056   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 14:22:56.946084   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.946096   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.946255   12169 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 14:22:56.946450   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.947372   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.947436   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.947485   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 14:22:56.947502   12169 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 14:22:56.947521   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.947634   12169 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0719 14:22:56.947705   12169 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 14:22:56.948863   12169 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0719 14:22:56.948880   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0719 14:22:56.948897   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.948973   12169 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 14:22:56.949114   12169 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 14:22:56.949124   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 14:22:56.949138   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.950420   12169 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 14:22:56.950434   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 14:22:56.950449   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.950736   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36255
	I0719 14:22:56.951539   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.952333   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.952349   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.952598   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.952682   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.953031   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.953064   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.953241   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.953243   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.953450   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.953766   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.953942   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.956646   12169 addons.go:234] Setting addon default-storageclass=true in "addons-018825"
	I0719 14:22:56.956690   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:22:56.957044   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.957063   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.957251   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.957425   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.958370   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.958707   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.958726   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.959103   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.959119   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.959363   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.959553   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.959607   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.959751   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.959805   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.959818   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.959912   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.959942   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.959968   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.960114   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.960152   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.960241   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.960280   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.960440   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.965008   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0719 14:22:56.965405   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.965995   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.966014   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.966401   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.966661   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.968344   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.969783   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45971
	I0719 14:22:56.970137   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.970352   12169 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 14:22:56.970805   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.970824   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.971738   12169 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 14:22:56.971757   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 14:22:56.971774   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:56.971923   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.972110   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:56.974026   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0719 14:22:56.974508   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0719 14:22:56.974655   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.975270   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.975351   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.975572   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.975839   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:56.975858   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:56.975872   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:22:56.975880   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:22:56.976121   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:22:56.976145   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:22:56.976154   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:56.976158   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:22:56.976168   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:22:56.976176   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:22:56.976215   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.976229   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.976325   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:56.976416   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0719 14:22:56.976490   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:56.976628   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:56.977130   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.977175   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.977190   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.977711   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.977735   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.977963   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:22:56.977984   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:22:56.978005   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	W0719 14:22:56.978080   12169 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0719 14:22:56.978381   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.978911   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.978938   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.978964   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
	I0719 14:22:56.979481   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.979608   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.980052   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.980068   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.980239   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.980262   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.980549   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.980597   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.981095   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.981139   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.981330   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38583
	I0719 14:22:56.981813   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.981839   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.982487   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.982760   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37639
	I0719 14:22:56.983056   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.983073   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.983135   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.983203   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0719 14:22:56.984245   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.984269   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.984280   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.984497   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.984576   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.984801   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.984842   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.985110   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.985148   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.986705   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.986732   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.990714   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.991339   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.991385   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:56.993109   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0719 14:22:56.995644   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.996106   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.996124   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.996548   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.996794   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:56.998149   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0719 14:22:56.998709   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:56.999126   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:56.999146   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:56.999386   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:56.999766   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:56.999798   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:57.004765   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0719 14:22:57.005205   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.005745   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.005767   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.005823   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0719 14:22:57.006169   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.006658   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.006681   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.006800   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.007057   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.007180   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.007230   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.009413   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.009480   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.011403   12169 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 14:22:57.011456   12169 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 14:22:57.012357   12169 out.go:177]   - Using image docker.io/busybox:stable
	I0719 14:22:57.012538   12169 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 14:22:57.012550   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 14:22:57.012569   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.014012   12169 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 14:22:57.014028   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 14:22:57.014043   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.016496   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40195
	I0719 14:22:57.016912   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.016991   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.017546   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.017563   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.018207   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.018399   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.018417   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.018555   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.019244   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.019305   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
	I0719 14:22:57.019429   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.019594   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.019733   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.019790   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.019808   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.019931   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.020174   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.020328   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.020477   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.020590   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.021351   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0719 14:22:57.021950   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.022366   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.025247   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0719 14:22:57.025826   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.025912   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.025915   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.025931   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.026345   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.026360   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.026401   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.026829   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:22:57.026855   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:22:57.027086   12169 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 14:22:57.027295   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0719 14:22:57.027089   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.027423   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.027411   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.027744   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.027826   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.027886   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.028064   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.028459   12169 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:22:57.028476   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 14:22:57.028492   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.029189   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.029207   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.029189   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38505
	I0719 14:22:57.029628   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.029887   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.030098   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.030115   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.030167   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.030969   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.031139   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.032270   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.033014   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.033052   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.033500   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.033912   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0719 14:22:57.033920   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.033949   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.034128   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.034174   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.034396   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.034461   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.034623   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.034746   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.034749   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.034767   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.034890   12169 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 14:22:57.034955   12169 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 14:22:57.035048   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.035568   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.035751   12169 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 14:22:57.035753   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 14:22:57.036533   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 14:22:57.036551   12169 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 14:22:57.036569   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.036626   12169 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 14:22:57.036636   12169 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 14:22:57.036648   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.037057   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.037683   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 14:22:57.037701   12169 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 14:22:57.037719   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.038558   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 14:22:57.039747   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 14:22:57.040468   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.040498   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.040894   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 14:22:57.041039   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.041060   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.041145   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.041163   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.041747   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.041784   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.041943   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.041998   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.042107   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.042150   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 14:22:57.042287   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.042326   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.042336   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.042346   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.042499   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.042585   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.042890   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.043060   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.043255   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.043496   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.044737   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 14:22:57.044803   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 14:22:57.045867   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 14:22:57.046028   12169 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 14:22:57.046042   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 14:22:57.046056   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.048006   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 14:22:57.049056   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.049076   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 14:22:57.049412   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.049430   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.049602   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.049758   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.049870   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.050099   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.050995   12169 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 14:22:57.052192   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 14:22:57.052209   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 14:22:57.052226   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.052286   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0719 14:22:57.052636   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:22:57.053253   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:22:57.053278   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:22:57.053576   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:22:57.053815   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:22:57.055614   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.055630   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:22:57.055840   12169 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 14:22:57.055855   12169 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 14:22:57.055870   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:22:57.055981   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.056036   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.056128   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.056294   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.056455   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	W0719 14:22:57.056608   12169 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53052->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.056633   12169 retry.go:31] will retry after 351.6847ms: ssh: handshake failed: read tcp 192.168.39.1:53052->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.056667   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:22:57.058375   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.058744   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:22:57.058776   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:22:57.058879   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:22:57.059039   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:22:57.059152   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:22:57.059259   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	W0719 14:22:57.091213   12169 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53062->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.091251   12169 retry.go:31] will retry after 357.490865ms: ssh: handshake failed: read tcp 192.168.39.1:53062->192.168.39.100:22: read: connection reset by peer
	I0719 14:22:57.374796   12169 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:22:57.374821   12169 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 14:22:57.430996   12169 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 14:22:57.431024   12169 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 14:22:57.522222   12169 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0719 14:22:57.522264   12169 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0719 14:22:57.527111   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 14:22:57.527129   12169 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 14:22:57.533144   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 14:22:57.559975   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 14:22:57.562578   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 14:22:57.565842   12169 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 14:22:57.565866   12169 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 14:22:57.604073   12169 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 14:22:57.604098   12169 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 14:22:57.640449   12169 node_ready.go:35] waiting up to 6m0s for node "addons-018825" to be "Ready" ...
	I0719 14:22:57.643361   12169 node_ready.go:49] node "addons-018825" has status "Ready":"True"
	I0719 14:22:57.643379   12169 node_ready.go:38] duration metric: took 2.907219ms for node "addons-018825" to be "Ready" ...
	I0719 14:22:57.643386   12169 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:22:57.658390   12169 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace to be "Ready" ...
	I0719 14:22:57.704488   12169 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 14:22:57.704518   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 14:22:57.706206   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 14:22:57.706223   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 14:22:57.708594   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 14:22:57.711566   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:22:57.736581   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 14:22:57.736613   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 14:22:57.760172   12169 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 14:22:57.760197   12169 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0719 14:22:57.787931   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 14:22:57.787959   12169 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 14:22:57.896690   12169 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 14:22:57.896716   12169 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 14:22:57.903872   12169 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 14:22:57.903901   12169 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 14:22:57.915405   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 14:22:57.915430   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 14:22:57.988334   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 14:22:58.039939   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 14:22:58.087755   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 14:22:58.087787   12169 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 14:22:58.087754   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 14:22:58.087829   12169 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 14:22:58.106668   12169 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 14:22:58.106694   12169 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 14:22:58.110836   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 14:22:58.126079   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 14:22:58.134336   12169 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 14:22:58.134359   12169 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 14:22:58.171247   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 14:22:58.171272   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 14:22:58.249777   12169 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 14:22:58.249800   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 14:22:58.275914   12169 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 14:22:58.275936   12169 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 14:22:58.282467   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 14:22:58.282501   12169 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 14:22:58.356829   12169 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 14:22:58.356861   12169 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 14:22:58.395076   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 14:22:58.395103   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 14:22:58.444496   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 14:22:58.453193   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 14:22:58.512057   12169 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 14:22:58.512090   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 14:22:58.621826   12169 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 14:22:58.621853   12169 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 14:22:58.738865   12169 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 14:22:58.738893   12169 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 14:22:58.805923   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 14:22:58.940276   12169 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 14:22:58.940301   12169 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 14:22:59.098411   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 14:22:59.098435   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 14:22:59.290689   12169 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 14:22:59.290710   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 14:22:59.494508   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 14:22:59.494537   12169 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 14:22:59.664638   12169 pod_ready.go:102] pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace has status "Ready":"False"
	I0719 14:22:59.689805   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 14:22:59.792431   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 14:22:59.792463   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 14:22:59.869427   12169 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.494575379s)
	I0719 14:22:59.869463   12169 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0719 14:23:00.309325   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 14:23:00.309353   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 14:23:00.384675   12169 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-018825" context rescaled to 1 replicas
	I0719 14:23:00.487227   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.954050452s)
	I0719 14:23:00.487294   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487306   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487302   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.927287998s)
	I0719 14:23:00.487347   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487362   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487382   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.924775843s)
	I0719 14:23:00.487417   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487430   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487721   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.487738   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.487747   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487756   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487803   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.487825   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.487833   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487841   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487810   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.487774   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.487868   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.487893   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:00.487923   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:00.487777   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.488062   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.488085   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.488147   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.488197   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.488215   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.488257   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:00.489885   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:00.489902   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:00.645104   12169 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 14:23:00.645135   12169 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 14:23:00.980284   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 14:23:01.804104   12169 pod_ready.go:92] pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:01.804130   12169 pod_ready.go:81] duration metric: took 4.145709079s for pod "coredns-7db6d8ff4d-88nlf" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:01.804177   12169 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-t6d29" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:01.948784   12169 pod_ready.go:92] pod "coredns-7db6d8ff4d-t6d29" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:01.948810   12169 pod_ready.go:81] duration metric: took 144.623131ms for pod "coredns-7db6d8ff4d-t6d29" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:01.948822   12169 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.104324   12169 pod_ready.go:92] pod "etcd-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.104347   12169 pod_ready.go:81] duration metric: took 155.517694ms for pod "etcd-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.104355   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.236324   12169 pod_ready.go:92] pod "kube-apiserver-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.236348   12169 pod_ready.go:81] duration metric: took 131.984509ms for pod "kube-apiserver-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.236359   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.305861   12169 pod_ready.go:92] pod "kube-controller-manager-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.305880   12169 pod_ready.go:81] duration metric: took 69.514726ms for pod "kube-controller-manager-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.305891   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qkf6b" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.424938   12169 pod_ready.go:92] pod "kube-proxy-qkf6b" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.424959   12169 pod_ready.go:81] duration metric: took 119.061404ms for pod "kube-proxy-qkf6b" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.424969   12169 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.542930   12169 pod_ready.go:92] pod "kube-scheduler-addons-018825" in "kube-system" namespace has status "Ready":"True"
	I0719 14:23:02.542954   12169 pod_ready.go:81] duration metric: took 117.97896ms for pod "kube-scheduler-addons-018825" in "kube-system" namespace to be "Ready" ...
	I0719 14:23:02.542963   12169 pod_ready.go:38] duration metric: took 4.899567394s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:23:02.542976   12169 api_server.go:52] waiting for apiserver process to appear ...
	I0719 14:23:02.543026   12169 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:23:02.879319   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.170691731s)
	I0719 14:23:02.879379   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879393   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879423   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.167812591s)
	I0719 14:23:02.879464   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.891088657s)
	I0719 14:23:02.879479   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879488   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.839512367s)
	I0719 14:23:02.879517   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879535   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879494   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879495   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879590   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879724   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879773   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879781   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.879789   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879795   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879855   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879864   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879875   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.879878   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879899   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.879906   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.879913   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879950   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879977   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.879977   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.879992   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880001   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880009   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880018   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.880027   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.879884   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.880086   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.880145   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.880169   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880175   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880285   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.880312   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880319   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.880327   12169 addons.go:475] Verifying addon registry=true in "addons-018825"
	I0719 14:23:02.880340   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:02.880374   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.880381   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.883210   12169 out.go:177] * Verifying registry addon...
	I0719 14:23:02.885686   12169 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 14:23:02.912166   12169 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 14:23:02.912193   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:02.984866   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:02.984885   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:02.985178   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:02.985201   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:02.985228   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:03.407827   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:03.891248   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:04.007417   12169 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 14:23:04.007453   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:23:04.010439   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.010803   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:23:04.010832   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.011001   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:23:04.011212   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:23:04.011394   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:23:04.011519   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:23:04.274446   12169 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
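	The ssh_runner/sshutil lines above show how the GCP credential files are pushed onto the node: minikube resolves the machine's SSH host, port, key path, and user from the KVM driver, opens an SSH connection, and copies the file over it. A rough sketch of building such a client with golang.org/x/crypto/ssh follows; disabling host-key verification is an assumption made only to keep the example short.

    package addons

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newSSHClient dials the node the way the sshutil.go line above describes:
    // key-based auth as the "docker" user on port 22. Skipping host-key checking
    // is an assumption for brevity, not how a hardened client should behave.
    func newSSHClient(ip, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", ip+":22", cfg)
    }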
	I0719 14:23:04.391335   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:04.532731   12169 addons.go:234] Setting addon gcp-auth=true in "addons-018825"
	I0719 14:23:04.532781   12169 host.go:66] Checking if "addons-018825" exists ...
	I0719 14:23:04.533078   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:23:04.533103   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:23:04.547767   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33795
	I0719 14:23:04.548234   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:23:04.548748   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:23:04.548773   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:23:04.549090   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:23:04.549691   12169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:23:04.549722   12169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:23:04.564909   12169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0719 14:23:04.565446   12169 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:23:04.566002   12169 main.go:141] libmachine: Using API Version  1
	I0719 14:23:04.566032   12169 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:23:04.566452   12169 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:23:04.566648   12169 main.go:141] libmachine: (addons-018825) Calling .GetState
	I0719 14:23:04.568337   12169 main.go:141] libmachine: (addons-018825) Calling .DriverName
	I0719 14:23:04.568614   12169 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 14:23:04.568647   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHHostname
	I0719 14:23:04.571701   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.572188   12169 main.go:141] libmachine: (addons-018825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:72:1e", ip: ""} in network mk-addons-018825: {Iface:virbr1 ExpiryTime:2024-07-19 15:22:16 +0000 UTC Type:0 Mac:52:54:00:7c:72:1e Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-018825 Clientid:01:52:54:00:7c:72:1e}
	I0719 14:23:04.572216   12169 main.go:141] libmachine: (addons-018825) DBG | domain addons-018825 has defined IP address 192.168.39.100 and MAC address 52:54:00:7c:72:1e in network mk-addons-018825
	I0719 14:23:04.572415   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHPort
	I0719 14:23:04.572613   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHKeyPath
	I0719 14:23:04.572748   12169 main.go:141] libmachine: (addons-018825) Calling .GetSSHUsername
	I0719 14:23:04.572890   12169 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/addons-018825/id_rsa Username:docker}
	I0719 14:23:04.895504   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:05.395962   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:05.913261   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:06.127498   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.01662294s)
	I0719 14:23:06.127545   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127557   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127506   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.001392587s)
	I0719 14:23:06.127593   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127607   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.68306074s)
	I0719 14:23:06.127620   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127649   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127660   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127692   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.674451805s)
	I0719 14:23:06.127735   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.127755   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.127785   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.321825889s)
	W0719 14:23:06.127812   12169 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 14:23:06.127844   12169 retry.go:31] will retry after 338.028309ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
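	The failure above is the usual CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch as the CRDs that introduce that kind, so the first kubectl apply exits 1 with "no matches for kind" and minikube schedules a retry (the 338ms backoff logged above, later re-run with --force). A sketch of that retry pattern is below; the helper name, backoff values, and matched error substring are assumptions, not minikube's exact retry.go logic.

    package addons

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` when the error indicates that a CRD
    // referenced by one of the manifests has not been registered yet, mirroring
    // the "will retry after ..." behaviour seen in the log above.
    func applyWithRetry(kubeconfig string, manifests []string, attempts int) error {
        args := []string{"--kubeconfig", kubeconfig, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        var lastErr error
        backoff := 300 * time.Millisecond
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
            // Only the CRD-ordering case is retried; any other error is returned immediately.
            if !strings.Contains(string(out), "no matches for kind") {
                return lastErr
            }
            time.Sleep(backoff)
            backoff *= 2
        }
        return lastErr
    }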
	I0719 14:23:06.128103   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128110   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128113   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128131   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128135   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128138   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128145   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128149   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128140   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128162   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128168   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128175   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128174   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.438334905s)
	I0719 14:23:06.128183   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128194   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128204   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128169   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128340   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128373   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128395   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128403   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128402   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128438   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128445   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128558   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128581   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128588   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128594   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.128601   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.128784   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.128803   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.128809   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.128983   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.129004   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.129013   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.129013   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.129021   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.129022   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.129031   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.129032   12169 addons.go:475] Verifying addon metrics-server=true in "addons-018825"
	I0719 14:23:06.129917   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.129964   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.129982   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.129999   12169 addons.go:475] Verifying addon ingress=true in "addons-018825"
	I0719 14:23:06.131845   12169 out.go:177] * Verifying ingress addon...
	I0719 14:23:06.131859   12169 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-018825 service yakd-dashboard -n yakd-dashboard
	
	I0719 14:23:06.134713   12169 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 14:23:06.147490   12169 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 14:23:06.147510   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:06.185911   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:06.185932   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:06.186291   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:06.186310   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:06.186333   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:06.390185   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:06.466076   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 14:23:06.644688   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:06.923572   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:07.160370   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:07.163148   12169 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.620100174s)
	I0719 14:23:07.163189   12169 api_server.go:72] duration metric: took 10.300180868s to wait for apiserver process to appear ...
	I0719 14:23:07.163196   12169 api_server.go:88] waiting for apiserver healthz status ...
	I0719 14:23:07.163195   12169 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.594552094s)
	I0719 14:23:07.163214   12169 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0719 14:23:07.163853   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.183524063s)
	I0719 14:23:07.163892   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:07.163913   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:07.164179   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:07.164195   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:07.164207   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:07.164222   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:07.164233   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:07.164527   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:07.164547   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:07.164558   12169 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-018825"
	I0719 14:23:07.164769   12169 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 14:23:07.165922   12169 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 14:23:07.167636   12169 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 14:23:07.168299   12169 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 14:23:07.169015   12169 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 14:23:07.169033   12169 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 14:23:07.174872   12169 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0719 14:23:07.176820   12169 api_server.go:141] control plane version: v1.30.3
	I0719 14:23:07.176843   12169 api_server.go:131] duration metric: took 13.640213ms to wait for apiserver health ...
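	The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint on 192.168.39.100:8443; the run proceeds once it returns 200 with body "ok". A minimal sketch of that probe is below; skipping TLS verification is an assumption to keep it short, whereas the real check authenticates against the cluster CA bundle.

    package addons

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy performs the same kind of check as the log above:
    // GET https://<host>:8443/healthz and expect HTTP 200.
    func apiserverHealthy(host string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(fmt.Sprintf("https://%s:8443/healthz", host))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }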
	I0719 14:23:07.176852   12169 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 14:23:07.206861   12169 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 14:23:07.206884   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:07.221671   12169 system_pods.go:59] 19 kube-system pods found
	I0719 14:23:07.221699   12169 system_pods.go:61] "coredns-7db6d8ff4d-88nlf" [6469d6b1-e474-4454-8359-e084930e879c] Running
	I0719 14:23:07.221703   12169 system_pods.go:61] "coredns-7db6d8ff4d-t6d29" [388f181c-2c70-4115-b39c-a0cc5d9548aa] Running
	I0719 14:23:07.221709   12169 system_pods.go:61] "csi-hostpath-attacher-0" [324e961d-ccdf-4cac-9736-a5a22192761c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 14:23:07.221713   12169 system_pods.go:61] "csi-hostpath-resizer-0" [c715f347-d341-4f6d-a2e8-ad1d7984ea15] Pending
	I0719 14:23:07.221723   12169 system_pods.go:61] "csi-hostpathplugin-4xs8c" [7ba367f1-c7ae-4bf5-bc2e-3bbd75010f18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 14:23:07.221727   12169 system_pods.go:61] "etcd-addons-018825" [2b837606-a3c7-4683-b31d-b43122758097] Running
	I0719 14:23:07.221730   12169 system_pods.go:61] "kube-apiserver-addons-018825" [b6fcbfe0-a44a-42bb-a757-ee784dd55ab9] Running
	I0719 14:23:07.221733   12169 system_pods.go:61] "kube-controller-manager-addons-018825" [68b911e6-14f5-4e65-b9a0-4db60638da8c] Running
	I0719 14:23:07.221738   12169 system_pods.go:61] "kube-ingress-dns-minikube" [543d1957-29b4-4f11-a3ef-a50baed9131f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0719 14:23:07.221741   12169 system_pods.go:61] "kube-proxy-qkf6b" [fd641a61-241c-4387-86e8-432a465cb34d] Running
	I0719 14:23:07.221744   12169 system_pods.go:61] "kube-scheduler-addons-018825" [2a0ca51b-e5b5-45f2-bbcb-b8d1ec175fd2] Running
	I0719 14:23:07.221748   12169 system_pods.go:61] "metrics-server-c59844bb4-p76dw" [4f3616b2-3dcb-414f-930a-494df347f25f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 14:23:07.221754   12169 system_pods.go:61] "nvidia-device-plugin-daemonset-6bcnd" [ec6c8a36-43a7-42bd-bb5d-9840f023356c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 14:23:07.221761   12169 system_pods.go:61] "registry-656c9c8d9c-k884k" [f109574c-299a-469d-94a4-ad81e51b9efa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 14:23:07.221766   12169 system_pods.go:61] "registry-proxy-jq9hm" [90bf1ad6-3f9b-465b-aaa2-0d77bd8970a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 14:23:07.221770   12169 system_pods.go:61] "snapshot-controller-745499f584-9xmxh" [ae2b17e9-c4ba-43cc-8c77-8e6e7e3482d9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.221782   12169 system_pods.go:61] "snapshot-controller-745499f584-wvpct" [a7f7dd53-317c-497c-89aa-2440d0bd45bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.221785   12169 system_pods.go:61] "storage-provisioner" [7aaff945-7762-4f72-9ca2-8d34dd65bf35] Running
	I0719 14:23:07.221789   12169 system_pods.go:61] "tiller-deploy-6677d64bcd-c8ct4" [f5d05cf3-2614-4ccf-9d6f-5afb52d9c031] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 14:23:07.221796   12169 system_pods.go:74] duration metric: took 44.937964ms to wait for pod list to return data ...
	I0719 14:23:07.221806   12169 default_sa.go:34] waiting for default service account to be created ...
	I0719 14:23:07.253917   12169 default_sa.go:45] found service account: "default"
	I0719 14:23:07.253942   12169 default_sa.go:55] duration metric: took 32.130367ms for default service account to be created ...
	I0719 14:23:07.253951   12169 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 14:23:07.282822   12169 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 14:23:07.282846   12169 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 14:23:07.285919   12169 system_pods.go:86] 19 kube-system pods found
	I0719 14:23:07.285951   12169 system_pods.go:89] "coredns-7db6d8ff4d-88nlf" [6469d6b1-e474-4454-8359-e084930e879c] Running
	I0719 14:23:07.285960   12169 system_pods.go:89] "coredns-7db6d8ff4d-t6d29" [388f181c-2c70-4115-b39c-a0cc5d9548aa] Running
	I0719 14:23:07.285971   12169 system_pods.go:89] "csi-hostpath-attacher-0" [324e961d-ccdf-4cac-9736-a5a22192761c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 14:23:07.285979   12169 system_pods.go:89] "csi-hostpath-resizer-0" [c715f347-d341-4f6d-a2e8-ad1d7984ea15] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 14:23:07.285987   12169 system_pods.go:89] "csi-hostpathplugin-4xs8c" [7ba367f1-c7ae-4bf5-bc2e-3bbd75010f18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 14:23:07.286001   12169 system_pods.go:89] "etcd-addons-018825" [2b837606-a3c7-4683-b31d-b43122758097] Running
	I0719 14:23:07.286013   12169 system_pods.go:89] "kube-apiserver-addons-018825" [b6fcbfe0-a44a-42bb-a757-ee784dd55ab9] Running
	I0719 14:23:07.286021   12169 system_pods.go:89] "kube-controller-manager-addons-018825" [68b911e6-14f5-4e65-b9a0-4db60638da8c] Running
	I0719 14:23:07.286050   12169 system_pods.go:89] "kube-ingress-dns-minikube" [543d1957-29b4-4f11-a3ef-a50baed9131f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0719 14:23:07.286060   12169 system_pods.go:89] "kube-proxy-qkf6b" [fd641a61-241c-4387-86e8-432a465cb34d] Running
	I0719 14:23:07.286064   12169 system_pods.go:89] "kube-scheduler-addons-018825" [2a0ca51b-e5b5-45f2-bbcb-b8d1ec175fd2] Running
	I0719 14:23:07.286069   12169 system_pods.go:89] "metrics-server-c59844bb4-p76dw" [4f3616b2-3dcb-414f-930a-494df347f25f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 14:23:07.286076   12169 system_pods.go:89] "nvidia-device-plugin-daemonset-6bcnd" [ec6c8a36-43a7-42bd-bb5d-9840f023356c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 14:23:07.286088   12169 system_pods.go:89] "registry-656c9c8d9c-k884k" [f109574c-299a-469d-94a4-ad81e51b9efa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 14:23:07.286094   12169 system_pods.go:89] "registry-proxy-jq9hm" [90bf1ad6-3f9b-465b-aaa2-0d77bd8970a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 14:23:07.286102   12169 system_pods.go:89] "snapshot-controller-745499f584-9xmxh" [ae2b17e9-c4ba-43cc-8c77-8e6e7e3482d9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.286109   12169 system_pods.go:89] "snapshot-controller-745499f584-wvpct" [a7f7dd53-317c-497c-89aa-2440d0bd45bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 14:23:07.286113   12169 system_pods.go:89] "storage-provisioner" [7aaff945-7762-4f72-9ca2-8d34dd65bf35] Running
	I0719 14:23:07.286119   12169 system_pods.go:89] "tiller-deploy-6677d64bcd-c8ct4" [f5d05cf3-2614-4ccf-9d6f-5afb52d9c031] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 14:23:07.286128   12169 system_pods.go:126] duration metric: took 32.171835ms to wait for k8s-apps to be running ...
	I0719 14:23:07.286135   12169 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 14:23:07.286177   12169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:23:07.362900   12169 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 14:23:07.362922   12169 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 14:23:07.391761   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:07.441153   12169 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 14:23:07.639752   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:07.686052   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:07.898628   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:08.139791   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:08.174892   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:08.393795   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:08.497937   12169 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.211737053s)
	I0719 14:23:08.497989   12169 system_svc.go:56] duration metric: took 1.211834363s WaitForService to wait for kubelet
	I0719 14:23:08.498000   12169 kubeadm.go:582] duration metric: took 11.63499022s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:23:08.498022   12169 node_conditions.go:102] verifying NodePressure condition ...
	I0719 14:23:08.498674   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.032532328s)
	I0719 14:23:08.498734   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:08.498755   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:08.499070   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:08.499134   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:08.499143   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:08.499159   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:08.499167   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:08.499375   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:08.499451   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:08.499462   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:08.504146   12169 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:23:08.504168   12169 node_conditions.go:123] node cpu capacity is 2
	I0719 14:23:08.504180   12169 node_conditions.go:105] duration metric: took 6.152829ms to run NodePressure ...
	I0719 14:23:08.504194   12169 start.go:241] waiting for startup goroutines ...
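	The NodePressure verification above reads the node's capacity (17734596Ki ephemeral storage and 2 CPUs here) and confirms no pressure conditions are set before startup continues. A client-go sketch of that condition check follows; the helper name is hypothetical.

    package addons

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodePressureFree reports an error if any node has MemoryPressure or
    // DiskPressure set to True, similar in spirit to the NodePressure check above.
    func nodePressureFree(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                }
            }
        }
        return nil
    }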
	I0719 14:23:08.638663   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:08.675873   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:08.959505   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:09.067561   12169 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.626361493s)
	I0719 14:23:09.067624   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:09.067642   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:09.067955   12169 main.go:141] libmachine: (addons-018825) DBG | Closing plugin on server side
	I0719 14:23:09.068014   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:09.068025   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:09.068038   12169 main.go:141] libmachine: Making call to close driver server
	I0719 14:23:09.068046   12169 main.go:141] libmachine: (addons-018825) Calling .Close
	I0719 14:23:09.068274   12169 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:23:09.068320   12169 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:23:09.070217   12169 addons.go:475] Verifying addon gcp-auth=true in "addons-018825"
	I0719 14:23:09.071882   12169 out.go:177] * Verifying gcp-auth addon...
	I0719 14:23:09.074074   12169 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 14:23:09.089191   12169 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 14:23:09.089222   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:09.148173   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:09.183692   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:09.390922   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:09.578499   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:09.640142   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:09.674945   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:09.890550   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:10.080360   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:10.140746   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:10.175692   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:10.390496   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:10.578943   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:10.639790   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:10.674734   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:10.891343   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:11.077235   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:11.140280   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:11.174816   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:11.391452   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:11.577866   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:11.639822   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:11.676853   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:11.891725   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:12.077705   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:12.139117   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:12.174390   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:12.390279   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:12.579319   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:12.639857   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:12.674067   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:13.016315   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:13.080513   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:13.139691   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:13.190397   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:13.390860   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:13.578055   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:13.640689   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:13.673991   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:13.890457   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:14.077916   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:14.139575   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:14.178283   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:14.391033   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:14.577298   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:14.639167   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:14.673530   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:14.890099   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:15.078050   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:15.139817   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:15.176544   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:15.389661   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:15.578152   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:15.640765   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:15.674076   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:15.891501   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:16.077518   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:16.138991   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:16.173551   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:16.389786   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:16.577883   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:16.639814   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:16.674356   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:16.891146   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:17.078498   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:17.139496   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:17.174372   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:17.390793   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:17.578482   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:17.639643   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:17.677576   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:17.890172   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:18.078906   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:18.139874   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:18.174417   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:18.392732   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:18.579446   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:18.639826   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:18.673242   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:18.890433   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:19.076952   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:19.140064   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:19.176726   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:19.390535   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:19.577336   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:19.641603   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:19.674621   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:19.891647   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:20.077440   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:20.138933   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:20.174538   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:20.390041   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:20.578578   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:20.640970   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:20.677311   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:20.891038   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:21.080158   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:21.140940   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:21.181419   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:21.389738   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:21.577442   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:21.639174   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:21.674456   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:22.219799   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:22.223563   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:22.224377   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:22.233671   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:22.393021   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:22.577811   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:22.639329   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:22.673564   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:22.890215   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:23.077828   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:23.140932   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:23.174169   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:23.393007   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:23.580062   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:23.640594   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:23.674794   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:23.891243   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:24.079186   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:24.142738   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:24.174851   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:24.393109   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:24.578330   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:24.638726   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:24.680561   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:24.890430   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:25.078589   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:25.139672   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:25.174175   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:25.391245   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:25.578058   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:25.639491   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:25.673629   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:25.890802   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:26.078315   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:26.141185   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:26.175425   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:26.393239   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:26.578148   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:26.638706   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:26.673773   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:26.890921   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:27.078192   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:27.139401   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:27.174151   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:27.390368   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:27.577334   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:27.639371   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:27.674403   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:27.890993   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:28.077722   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:28.139565   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:28.175026   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:28.390304   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:28.577850   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:28.640095   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:28.672887   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:28.890036   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:29.078027   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:29.139836   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:29.173732   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:29.391742   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:29.578351   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:29.639083   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:29.674203   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:29.890644   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:30.077879   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:30.139546   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:30.174792   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:30.390248   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:30.578798   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:30.640079   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:30.674039   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:30.890980   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:31.078380   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:31.139997   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:31.172703   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:31.391041   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:31.578567   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:31.643445   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:31.674107   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:31.892495   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:32.077533   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:32.139462   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:32.174095   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:32.390808   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:32.577905   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:32.639599   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:32.674109   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:32.890466   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:33.077984   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:33.139703   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:33.179896   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:33.391628   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:33.966141   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:33.966391   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:33.967311   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:33.969556   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:34.077338   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:34.138763   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:34.174418   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:34.392507   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:34.578439   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:34.639480   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:34.674201   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:34.891012   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:35.078038   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:35.138916   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:35.175055   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:35.390628   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:35.577896   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:35.639415   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:35.673627   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:35.892851   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:36.079534   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:36.139810   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:36.174674   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:36.391174   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:36.577841   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:36.640276   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:36.675547   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:36.891595   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:37.077744   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:37.139827   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:37.176158   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:37.392560   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:37.578295   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:37.641879   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:37.676042   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:38.114496   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:38.114656   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:38.339762   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:38.341832   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:38.390903   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:38.577826   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:38.639813   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:38.673952   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:38.890839   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:39.079666   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:39.140167   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:39.174255   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:39.391085   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:39.580376   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:39.640729   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:39.676466   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:39.890500   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:40.078938   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:40.140294   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:40.174570   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:40.390484   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:40.577609   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:40.639498   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:40.673992   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:40.892738   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:41.078167   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:41.138600   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:41.173886   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:41.390716   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:41.594855   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:41.640027   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:41.677550   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:42.137361   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:42.138111   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:42.161283   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:42.175453   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:42.391340   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:42.578531   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:42.639663   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:42.675245   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:42.892488   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:43.077295   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:43.138689   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:43.174498   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:43.390871   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:43.577506   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:43.639915   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:43.673623   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:43.891302   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:44.078088   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:44.139348   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:44.184873   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:44.392916   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:44.578401   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:44.639533   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:44.674778   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:44.892898   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:45.078064   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:45.139548   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:45.173400   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:45.391161   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:45.578566   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:45.639517   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:45.674426   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:45.890419   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:46.077448   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:46.140449   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:46.174094   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:46.391020   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:46.577993   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:46.638638   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:46.673551   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:46.891100   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:47.077735   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:47.139247   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:47.173601   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:47.391050   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:47.580098   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:47.640258   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:47.674653   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:47.891066   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:48.078311   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:48.139594   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:48.173701   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:48.389940   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:48.579225   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:48.638932   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:48.674913   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:48.890980   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:49.078050   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:49.138739   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:49.173975   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:49.392518   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:49.578099   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:49.639854   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:49.674534   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:49.890274   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:50.078248   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:50.139149   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:50.173658   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:50.394871   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:50.578609   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:50.639776   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:50.675582   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:50.891284   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:51.080933   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:51.139495   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:51.173911   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:51.390256   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:51.579961   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:51.639764   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:51.674042   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:51.894306   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:52.184560   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:52.184953   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:52.187424   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:52.389726   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:52.578111   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:52.638577   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:52.673649   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:52.893123   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:53.078264   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:53.138382   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:53.174076   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:53.391688   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:53.577921   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:53.639376   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:53.673635   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:53.891301   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 14:23:54.078225   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:54.138863   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:54.174357   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:54.390361   12169 kapi.go:107] duration metric: took 51.504674408s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 14:23:54.577229   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:54.639302   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:54.673655   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:55.078055   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:55.139351   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:55.173757   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:55.578338   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:55.638916   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:55.674126   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:56.078082   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:56.139188   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:56.177136   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:56.579090   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:56.639294   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:56.674070   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:57.078679   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:57.139375   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:57.173409   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:57.765515   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:57.766060   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:57.766493   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:58.079510   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:58.139357   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:58.173663   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:58.578341   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:58.641243   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:58.673486   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:59.077824   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:59.139664   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:59.173880   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:23:59.577610   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:23:59.640471   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:23:59.674615   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:00.089084   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:00.147649   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:00.181957   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:00.607085   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:00.638996   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:00.674654   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:01.078830   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:01.139683   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:01.174678   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:01.582376   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:01.639195   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:01.674127   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:02.079991   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:02.138652   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:02.181639   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:02.578160   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:02.638772   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:02.675479   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:03.078174   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:03.375809   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:03.380167   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:03.577663   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:03.639357   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:03.674024   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:04.078079   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:04.138692   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:04.175546   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:04.578566   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:04.639335   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:04.675736   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:05.079197   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:05.140682   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:05.177342   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:05.578162   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:05.639367   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:05.673792   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:06.078003   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:06.138644   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:06.176198   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:06.754251   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:06.754689   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:06.754826   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:07.077709   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:07.139991   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:07.173837   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:07.577969   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:07.639965   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:07.674099   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:08.077790   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:08.140469   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:08.175987   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:08.579423   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:08.650983   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:08.674429   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:09.077728   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:09.151521   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:09.173915   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:09.586855   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:09.656758   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:09.675669   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:10.077285   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:10.139496   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:10.173914   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:10.578798   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:10.640569   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:10.679558   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:11.077333   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:11.139125   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:11.173193   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:11.578116   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:11.639305   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:11.674936   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:12.078294   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:12.139449   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:12.174132   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:12.577354   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:12.639325   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:12.682051   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:13.080952   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:13.141844   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:13.176464   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:13.577462   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:13.639073   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:13.677193   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:14.095590   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:14.141219   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:14.176853   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:14.731374   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:14.733292   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:14.749345   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:15.077556   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:15.141142   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:15.174621   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:15.577586   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:15.639607   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:15.675455   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:16.078423   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:16.139479   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:16.173958   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:16.578200   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:16.639511   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:16.674451   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:17.078025   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:17.139045   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:17.177630   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:17.578373   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:17.639379   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:17.673909   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:18.080185   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:18.160958   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:18.174707   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:18.790841   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:18.793352   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:18.794120   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:19.077790   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:19.139525   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:19.174000   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:19.578961   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:19.641521   12169 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 14:24:19.674485   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:20.077825   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:20.143928   12169 kapi.go:107] duration metric: took 1m14.009212902s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 14:24:20.175434   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:20.578048   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:20.678177   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:21.079242   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:21.174567   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:21.578717   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:21.673721   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:22.078947   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:22.174166   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:22.578158   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:22.674433   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:23.078170   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:23.183244   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:23.577745   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:23.673727   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:24.078933   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:24.184709   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:24.577552   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:24.680034   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:25.077975   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 14:24:25.181450   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:25.578555   12169 kapi.go:107] duration metric: took 1m16.504475345s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 14:24:25.580480   12169 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-018825 cluster.
	I0719 14:24:25.582128   12169 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 14:24:25.583526   12169 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
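	For reference, the gcp-auth opt-out mentioned above is driven by a pod label. A minimal sketch of a pod spec that skips credential mounting might look like the following; the pod name and image are illustrative only, and the assumption (per the addon's own message) is that the presence of the gcp-auth-skip-secret label is what the webhook checks:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds            # hypothetical name for illustration
	      labels:
	        gcp-auth-skip-secret: "true"  # tells the gcp-auth webhook not to mount credentials
	    spec:
	      containers:
	      - name: app
	        image: nginx                  # any image; GCP credentials are not injected into this pod
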
	I0719 14:24:25.675289   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:26.177915   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:26.675615   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:27.176919   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:27.674799   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:28.173592   12169 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 14:24:28.675166   12169 kapi.go:107] duration metric: took 1m21.506864361s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 14:24:28.677070   12169 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, helm-tiller, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0719 14:24:28.678487   12169 addons.go:510] duration metric: took 1m31.815591478s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner helm-tiller storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0719 14:24:28.678534   12169 start.go:246] waiting for cluster config update ...
	I0719 14:24:28.678551   12169 start.go:255] writing updated cluster config ...
	I0719 14:24:28.678807   12169 ssh_runner.go:195] Run: rm -f paused
	I0719 14:24:28.732494   12169 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 14:24:28.734399   12169 out.go:177] * Done! kubectl is now configured to use "addons-018825" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.769414277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c79e9129f463c0309fd90ff9b38876b3c5a544d7e307b07981a971c8c422f0a,PodSandboxId:45805c6b967d961e1c6735116e68385497482e29bbff32542328d6cf541f9578,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721399229767556511,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xms8k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6,},Annotations:map[string]string{io.kubernetes.container.hash: 291aca0a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd94bfe746ea4b78e20f44c246e956a69394730c4b395c304936eb6419f0e63,PodSandboxId:32b308f7c4685423bcd52c889bac3d1df242a74550e550cfacdcb13aadc92217,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721399088077353431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e717529c-0e3d-45e0-a926-ef718c1b5993,},Annotations:map[string]string{io.kubernet
es.container.hash: 7a55caeb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe5104151282adb80de7f53c855a9618bf35fe93d56fa8bc15e18059f3c9c29,PodSandboxId:0222f2b642c6c32c76b4e09c1e861e29c1e14fb7f664c943fbe43a9d8c1a9c51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721399075992757790,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-zbbqp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8b18ef56-46ef-41f8-a085-3840463e848b,},Annotations:map[string]string{io.kubernetes.container.hash: 9bb964ab,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7,PodSandboxId:17abd8263031755aab6ee85264043bc7e8d6e79bdd3a34aea3d75833a3510996,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721399064085070163,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-jcn9w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 48482eee-1155-4bd1-815d-da6c964eb84b,},Annotations:map[string]string{io.kubernetes.container.hash: a0b8e1ab,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4e7846f214403e557a2e0d9c3560c754e30ae4cead349602afab457a5b134b,PodSandboxId:40d090ea3c19fb34c6248e05a81bd4236415cee2b5cada6ad79b79a00371259f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721399
039428328313,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-hw6vk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2dad5a45-80c8-4d63-aadc-d2166af16dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a8be92c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec,PodSandboxId:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721399024638069798,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-p76dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f3616b2-3dcb-414f-930a-494df347f25f,},Annotations:map[string]string{io.kubernetes.container.hash: 557ea971,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d,PodSandboxId:2c2739e1b982d5074b92fcfabaf52125c29abf02b659aa5fcfec7c5a26b89c91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721398983939675151,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaff945-7762-4f72-9ca2-8d34dd65bf35,},Annotations:map[string]string{io.kubernetes.container.hash: 6018e0a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb,PodSandboxId:c3230c7b31066b79b685df03db3c8864db0b6180c12a9187331779ef31c686dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721398979612365622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-88nlf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6469d6b1-e474-4454-8359-e084930e879c,},Annotations:map[string]string{io.kubernetes.container.hash: 44476b1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f5795a1085,PodSandb
oxId:52e9714bd92975ec23e00fa14369a463bb4e15f8ff5d22641737bc63dadea087,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721398976905464359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qkf6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd641a61-241c-4387-86e8-432a465cb34d,},Annotations:map[string]string{io.kubernetes.container.hash: d1e7466e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a,PodSandboxId:3f8b6c88df5ee1404e310dcafcd2
42c1ef5e451c25e99402a58b0dc03b7d300c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721398957767810131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c11abc729c66d57f89d84b110e6d88,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860,PodSandboxId:c453e1c48b50c2bec77c372aaedf880f4cd8d56d7ef32
3090f51ebe002f73b11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721398957760469365,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea775724addcafb0355b562dd786d99d,},Annotations:map[string]string{io.kubernetes.container.hash: b0193324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767,PodSandboxId:bf669a39f6ed683018bfcb341d285ee52ed8941460f1cca75acda41afeb0308c,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721398957728984706,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f6fc113fd5e070230b8073bfafcb51,},Annotations:map[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b,PodSandboxId:075623588f2bc92065c01198b67a8f05997bfe5c4e6f2b887283fe4e8d5168e9,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721398957737089898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36d31b2a7c34ccb5f227ebdde65c177,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77b67325-5bd5-4b1f-8ac7-8b99cfa4548a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.807778622Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4473238-51e0-4ec7-9789-456150d86e6e name=/runtime.v1.RuntimeService/Version
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.807873274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4473238-51e0-4ec7-9789-456150d86e6e name=/runtime.v1.RuntimeService/Version
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.811424422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6f1b935-4e8f-4047-b73b-d7362ea2788e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.812967545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721399426812933430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580634,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6f1b935-4e8f-4047-b73b-d7362ea2788e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.815269118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9342833-acd2-411f-8b06-f80107d3b4d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.815528576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9342833-acd2-411f-8b06-f80107d3b4d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.816291795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c79e9129f463c0309fd90ff9b38876b3c5a544d7e307b07981a971c8c422f0a,PodSandboxId:45805c6b967d961e1c6735116e68385497482e29bbff32542328d6cf541f9578,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721399229767556511,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-xms8k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ae5fa30-3f2a-4ac8-b7be-dfe19bd244a6,},Annotations:map[string]string{io.kubernetes.container.hash: 291aca0a,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd94bfe746ea4b78e20f44c246e956a69394730c4b395c304936eb6419f0e63,PodSandboxId:32b308f7c4685423bcd52c889bac3d1df242a74550e550cfacdcb13aadc92217,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721399088077353431,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e717529c-0e3d-45e0-a926-ef718c1b5993,},Annotations:map[string]string{io.kubernet
es.container.hash: 7a55caeb,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fe5104151282adb80de7f53c855a9618bf35fe93d56fa8bc15e18059f3c9c29,PodSandboxId:0222f2b642c6c32c76b4e09c1e861e29c1e14fb7f664c943fbe43a9d8c1a9c51,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721399075992757790,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-zbbqp,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 8b18ef56-46ef-41f8-a085-3840463e848b,},Annotations:map[string]string{io.kubernetes.container.hash: 9bb964ab,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7,PodSandboxId:17abd8263031755aab6ee85264043bc7e8d6e79bdd3a34aea3d75833a3510996,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721399064085070163,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-jcn9w,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 48482eee-1155-4bd1-815d-da6c964eb84b,},Annotations:map[string]string{io.kubernetes.container.hash: a0b8e1ab,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a4e7846f214403e557a2e0d9c3560c754e30ae4cead349602afab457a5b134b,PodSandboxId:40d090ea3c19fb34c6248e05a81bd4236415cee2b5cada6ad79b79a00371259f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721399
039428328313,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-hw6vk,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2dad5a45-80c8-4d63-aadc-d2166af16dc0,},Annotations:map[string]string{io.kubernetes.container.hash: 8a8be92c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec,PodSandboxId:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721399024638069798,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-p76dw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f3616b2-3dcb-414f-930a-494df347f25f,},Annotations:map[string]string{io.kubernetes.container.hash: 557ea971,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d,PodSandboxId:2c2739e1b982d5074b92fcfabaf52125c29abf02b659aa5fcfec7c5a26b89c91,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721398983939675151,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaff945-7762-4f72-9ca2-8d34dd65bf35,},Annotations:map[string]string{io.kubernetes.container.hash: 6018e0a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb,PodSandboxId:c3230c7b31066b79b685df03db3c8864db0b6180c12a9187331779ef31c686dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721398979612365622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-88nlf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6469d6b1-e474-4454-8359-e084930e879c,},Annotations:map[string]string{io.kubernetes.container.hash: 44476b1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f5795a1085,PodSandb
oxId:52e9714bd92975ec23e00fa14369a463bb4e15f8ff5d22641737bc63dadea087,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721398976905464359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qkf6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd641a61-241c-4387-86e8-432a465cb34d,},Annotations:map[string]string{io.kubernetes.container.hash: d1e7466e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a,PodSandboxId:3f8b6c88df5ee1404e310dcafcd2
42c1ef5e451c25e99402a58b0dc03b7d300c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721398957767810131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c11abc729c66d57f89d84b110e6d88,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860,PodSandboxId:c453e1c48b50c2bec77c372aaedf880f4cd8d56d7ef32
3090f51ebe002f73b11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721398957760469365,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea775724addcafb0355b562dd786d99d,},Annotations:map[string]string{io.kubernetes.container.hash: b0193324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767,PodSandboxId:bf669a39f6ed683018bfcb341d285ee52ed8941460f1cca75acda41afeb0308c,Metadata:&ContainerMetadata
{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721398957728984706,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f6fc113fd5e070230b8073bfafcb51,},Annotations:map[string]string{io.kubernetes.container.hash: 836ad03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b,PodSandboxId:075623588f2bc92065c01198b67a8f05997bfe5c4e6f2b887283fe4e8d5168e9,Metadata:&ContainerMetadata{Name:kube-contro
ller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721398957737089898,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-018825,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b36d31b2a7c34ccb5f227ebdde65c177,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9342833-acd2-411f-8b06-f80107d3b4d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.820447530Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec.CGY0Q2\"" file="server/server.go:805"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.820546713Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec.CGY0Q2\"" file="server/server.go:805"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.820573939Z" level=debug msg="Container or sandbox exited: 7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec.CGY0Q2" file="server/server.go:810"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.820605650Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec\"" file="server/server.go:805"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.820628311Z" level=debug msg="Container or sandbox exited: 7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec" file="server/server.go:810"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.820646524Z" level=debug msg="container exited and found: 7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec" file="server/server.go:825"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.820684176Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec.CGY0Q2\"" file="server/server.go:805"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.822579644Z" level=debug msg="Unmounted container 7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec" file="storage/runtime.go:495" id=4e29176b-9985-4823-812a-475cbc6d7d3e name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.844570044Z" level=debug msg="Found exit code for 7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec: 0" file="oci/runtime_oci.go:1022"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.844763205Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:557ea971 io.kubernetes.container.name:metrics-server io.kubernetes.container.ports:[{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}] io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"557ea971\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"https\\\",\\\"containerPort\\\":4443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.c
ontainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-07-19T14:23:44.638198435Z io.kubernetes.cri-o.IP.0:10.244.0.9 io.kubernetes.cri-o.Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872 io.kubernetes.cri-o.ImageName:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a io.kubernetes.cri-o.ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62 io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"metrics-server\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-p76dw\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4f3616b2-3
dcb-414f-930a-494df347f25f\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-p76dw_4f3616b2-3dcb-414f-930a-494df347f25f/metrics-server/0.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/3ac7800da739746dad6a33431adb8ea214d660547e7dfff9292024481ec599b9/merged io.kubernetes.cri-o.Name:k8s_metrics-server_metrics-server-c59844bb4-p76dw_kube-system_4f3616b2-3dcb-414f-930a-494df347f25f_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3 io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-p76dw_kube-system_4f3616b2-3dcb-414f-930a-494df347f25f_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOn
ce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/tmp\",\"host_path\":\"/var/lib/kubelet/pods/4f3616b2-3dcb-414f-930a-494df347f25f/volumes/kubernetes.io~empty-dir/tmp-dir\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4f3616b2-3dcb-414f-930a-494df347f25f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4f3616b2-3dcb-414f-930a-494df347f25f/containers/metrics-server/52eb4d22\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/4f3616b2-3dcb-414f-930a-494df347f25f/volumes/kubernetes.io~projected/kube-api-access-5f46f\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:metrics-server-c59844bb4-p76dw io.kubernetes.pod.na
mespace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:4f3616b2-3dcb-414f-930a-494df347f25f kubernetes.io/config.seen:2024-07-19T14:23:02.496938353Z kubernetes.io/config.source:api]} Created:2024-07-19 14:23:44.691021117 +0000 UTC Started:2024-07-19 14:23:44.71760627 +0000 UTC m=+76.348196506 Finished:2024-07-19 14:30:26.819413359 +0000 UTC ExitCode:0xc0019deb90 OOMKilled:false SeccompKilled:false Error: InitPid:4751 InitStartTime:9827 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=4e29176b-9985-4823-812a-475cbc6d7d3e name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.848616133Z" level=info msg="Stopped container 7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec: kube-system/metrics-server-c59844bb4-p76dw/metrics-server" file="server/container_stop.go:29" id=4e29176b-9985-4823-812a-475cbc6d7d3e name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.849073419Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/7079bee0e4947820ffb13f8df553f9a00cf4f2410e02923155dfd5b4f381dcec\"" file="server/server.go:805"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.849117499Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=4e29176b-9985-4823-812a-475cbc6d7d3e name=/runtime.v1.RuntimeService/StopContainer
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.849744886Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3,}" file="otel-collector/interceptors.go:62" id=a1971169-c9b8-41d0-9a3c-bc211d148af5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.849798951Z" level=info msg="Stopping pod sandbox: 0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3" file="server/sandbox_stop.go:18" id=a1971169-c9b8-41d0-9a3c-bc211d148af5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.850424351Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-p76dw Namespace:kube-system ID:0b73c2002a246d0ea9a96647eca74d329a6b81e22e9f8063c5cb68c0d05365f3 UID:4f3616b2-3dcb-414f-930a-494df347f25f NetNS:/var/run/netns/f208c297-e437-428b-83de-79093c3dbc09 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod4f3616b2-3dcb-414f-930a-494df347f25f PodAnnotations:0xc000a24d08}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Jul 19 14:30:26 addons-018825 crio[682]: time="2024-07-19 14:30:26.850836340Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-p76dw from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c79e9129f463       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   45805c6b967d9       hello-world-app-6778b5fc9f-xms8k
	2dd94bfe746ea       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   32b308f7c4685       nginx
	9fe5104151282       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   0222f2b642c6c       headlamp-7867546754-zbbqp
	20d9dbb2af5f6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   17abd82630317       gcp-auth-5db96cd9b4-jcn9w
	4a4e7846f2144       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         6 minutes ago       Running             yakd                      0                   40d090ea3c19f       yakd-dashboard-799879c74f-hw6vk
	7079bee0e4947       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Exited              metrics-server            0                   0b73c2002a246       metrics-server-c59844bb4-p76dw
	822879e4213fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   2c2739e1b982d       storage-provisioner
	422891b0c477f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   c3230c7b31066       coredns-7db6d8ff4d-88nlf
	fd66a0731caf0       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        7 minutes ago       Running             kube-proxy                0                   52e9714bd9297       kube-proxy-qkf6b
	e3dc32aa02fb3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        7 minutes ago       Running             kube-scheduler            0                   3f8b6c88df5ee       kube-scheduler-addons-018825
	9144c07256374       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        7 minutes ago       Running             etcd                      0                   c453e1c48b50c       etcd-addons-018825
	b6ca310eec97b       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        7 minutes ago       Running             kube-controller-manager   0                   075623588f2bc       kube-controller-manager-addons-018825
	810c03a705d7a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        7 minutes ago       Running             kube-apiserver            0                   bf669a39f6ed6       kube-apiserver-addons-018825
	
	
	==> coredns [422891b0c477f893f39e295c21033a007c9961025c34d4da188b4abad176a8bb] <==
	[INFO] 10.244.0.8:40121 - 58535 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000174239s
	[INFO] 10.244.0.8:35746 - 24813 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106351s
	[INFO] 10.244.0.8:35746 - 29423 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055441s
	[INFO] 10.244.0.8:46291 - 52391 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122056s
	[INFO] 10.244.0.8:46291 - 58789 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080723s
	[INFO] 10.244.0.8:48091 - 57259 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145062s
	[INFO] 10.244.0.8:48091 - 23977 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000165056s
	[INFO] 10.244.0.8:37162 - 64064 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000065297s
	[INFO] 10.244.0.8:37162 - 21830 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000026607s
	[INFO] 10.244.0.8:49381 - 50366 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030695s
	[INFO] 10.244.0.8:49381 - 27323 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000021558s
	[INFO] 10.244.0.8:58214 - 54981 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027148s
	[INFO] 10.244.0.8:58214 - 60359 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000020888s
	[INFO] 10.244.0.8:53807 - 54883 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000026107s
	[INFO] 10.244.0.8:53807 - 64865 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000029304s
	[INFO] 10.244.0.22:44007 - 57426 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000643433s
	[INFO] 10.244.0.22:34435 - 64848 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000478536s
	[INFO] 10.244.0.22:56700 - 49705 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000679509s
	[INFO] 10.244.0.22:39456 - 13856 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106406s
	[INFO] 10.244.0.22:40810 - 33356 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000168688s
	[INFO] 10.244.0.22:54214 - 40721 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000310371s
	[INFO] 10.244.0.22:55579 - 26038 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000619934s
	[INFO] 10.244.0.22:54484 - 9458 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000790172s
	[INFO] 10.244.0.25:46992 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000422766s
	[INFO] 10.244.0.25:36367 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148274s
	
	
	==> describe nodes <==
	Name:               addons-018825
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-018825
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=addons-018825
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T14_22_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-018825
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:22:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-018825
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:30:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:27:17 +0000   Fri, 19 Jul 2024 14:22:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:27:17 +0000   Fri, 19 Jul 2024 14:22:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:27:17 +0000   Fri, 19 Jul 2024 14:22:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:27:17 +0000   Fri, 19 Jul 2024 14:22:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    addons-018825
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 98d28a50f0df4400be945283e1dcebdb
	  System UUID:                98d28a50-f0df-4400-be94-5283e1dcebdb
	  Boot ID:                    c79de801-425e-4495-a3c6-178016b9936c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-xms8k         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  gcp-auth                    gcp-auth-5db96cd9b4-jcn9w                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  headlamp                    headlamp-7867546754-zbbqp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 coredns-7db6d8ff4d-88nlf                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m31s
	  kube-system                 etcd-addons-018825                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m45s
	  kube-system                 kube-apiserver-addons-018825             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-controller-manager-addons-018825    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-proxy-qkf6b                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-scheduler-addons-018825             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  yakd-dashboard              yakd-dashboard-799879c74f-hw6vk          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m50s (x8 over 7m50s)  kubelet          Node addons-018825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m50s (x8 over 7m50s)  kubelet          Node addons-018825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m50s (x7 over 7m50s)  kubelet          Node addons-018825 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m45s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m45s                  kubelet          Node addons-018825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s                  kubelet          Node addons-018825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s                  kubelet          Node addons-018825 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m44s                  kubelet          Node addons-018825 status is now: NodeReady
	  Normal  RegisteredNode           7m32s                  node-controller  Node addons-018825 event: Registered Node addons-018825 in Controller
	
	
	==> dmesg <==
	[ +14.020982] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.255562] systemd-fstab-generator[1528]: Ignoring "noauto" option for root device
	[Jul19 14:23] kauditd_printk_skb: 101 callbacks suppressed
	[  +5.016150] kauditd_printk_skb: 126 callbacks suppressed
	[  +7.481033] kauditd_printk_skb: 98 callbacks suppressed
	[ +20.002413] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.001009] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.350943] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.050033] kauditd_printk_skb: 2 callbacks suppressed
	[Jul19 14:24] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.283871] kauditd_printk_skb: 50 callbacks suppressed
	[  +9.225814] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.662219] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.063069] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.605583] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.614091] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.152793] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.471359] kauditd_printk_skb: 3 callbacks suppressed
	[Jul19 14:25] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.043853] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.803530] kauditd_printk_skb: 35 callbacks suppressed
	[Jul19 14:26] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.449891] kauditd_printk_skb: 33 callbacks suppressed
	[Jul19 14:27] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.940695] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [9144c072563740516fe72157b37119a3b56ee91c4edff4686e056cdb78898860] <==
	{"level":"warn","ts":"2024-07-19T14:24:18.775103Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.432035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85649"}
	{"level":"info","ts":"2024-07-19T14:24:18.776331Z","caller":"traceutil/trace.go:171","msg":"trace[1068217016] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1144; }","duration":"115.441773ms","start":"2024-07-19T14:24:18.660647Z","end":"2024-07-19T14:24:18.776089Z","steps":["trace[1068217016] 'agreement among raft nodes before linearized reading'  (duration: 114.350894ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:24:23.898179Z","caller":"traceutil/trace.go:171","msg":"trace[827142375] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"107.693303ms","start":"2024-07-19T14:24:23.790471Z","end":"2024-07-19T14:24:23.898164Z","steps":["trace[827142375] 'process raft request'  (duration: 107.367229ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:24:33.793909Z","caller":"traceutil/trace.go:171","msg":"trace[1475961601] linearizableReadLoop","detail":"{readStateIndex:1302; appliedIndex:1301; }","duration":"293.577313ms","start":"2024-07-19T14:24:33.500304Z","end":"2024-07-19T14:24:33.793881Z","steps":["trace[1475961601] 'read index received'  (duration: 293.447167ms)","trace[1475961601] 'applied index is now lower than readState.Index'  (duration: 129.686µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T14:24:33.794201Z","caller":"traceutil/trace.go:171","msg":"trace[2119557611] transaction","detail":"{read_only:false; response_revision:1263; number_of_response:1; }","duration":"456.508515ms","start":"2024-07-19T14:24:33.337679Z","end":"2024-07-19T14:24:33.794187Z","steps":["trace[2119557611] 'process raft request'  (duration: 456.115958ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:24:33.794332Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:24:33.33766Z","time spent":"456.570276ms","remote":"127.0.0.1:44308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-2ghzlkwnnipkmrn5gyeinw4bvu\" mod_revision:1175 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-2ghzlkwnnipkmrn5gyeinw4bvu\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-2ghzlkwnnipkmrn5gyeinw4bvu\" > >"}
	{"level":"warn","ts":"2024-07-19T14:24:33.794473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.184984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T14:24:33.794552Z","caller":"traceutil/trace.go:171","msg":"trace[1897348490] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1263; }","duration":"294.302313ms","start":"2024-07-19T14:24:33.500243Z","end":"2024-07-19T14:24:33.794546Z","steps":["trace[1897348490] 'agreement among raft nodes before linearized reading'  (duration: 294.205552ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:24:35.833445Z","caller":"traceutil/trace.go:171","msg":"trace[1946019562] linearizableReadLoop","detail":"{readStateIndex:1316; appliedIndex:1315; }","duration":"307.492572ms","start":"2024-07-19T14:24:35.525938Z","end":"2024-07-19T14:24:35.83343Z","steps":["trace[1946019562] 'read index received'  (duration: 307.177936ms)","trace[1946019562] 'applied index is now lower than readState.Index'  (duration: 313.959µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T14:24:35.834121Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.179452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:88497"}
	{"level":"info","ts":"2024-07-19T14:24:35.834183Z","caller":"traceutil/trace.go:171","msg":"trace[276865736] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1277; }","duration":"308.275482ms","start":"2024-07-19T14:24:35.525896Z","end":"2024-07-19T14:24:35.834172Z","steps":["trace[276865736] 'agreement among raft nodes before linearized reading'  (duration: 307.963159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:24:35.834377Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:24:35.525882Z","time spent":"308.322884ms","remote":"127.0.0.1:44216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":19,"response size":88519,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-07-19T14:24:35.835042Z","caller":"traceutil/trace.go:171","msg":"trace[1836533746] transaction","detail":"{read_only:false; response_revision:1277; number_of_response:1; }","duration":"352.349393ms","start":"2024-07-19T14:24:35.482681Z","end":"2024-07-19T14:24:35.835031Z","steps":["trace[1836533746] 'process raft request'  (duration: 350.47517ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:24:35.83514Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:24:35.482666Z","time spent":"352.426108ms","remote":"127.0.0.1:44308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-018825\" mod_revision:1199 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-018825\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-018825\" > >"}
	{"level":"info","ts":"2024-07-19T14:24:47.716917Z","caller":"traceutil/trace.go:171","msg":"trace[1860969799] transaction","detail":"{read_only:false; response_revision:1407; number_of_response:1; }","duration":"165.098453ms","start":"2024-07-19T14:24:47.551781Z","end":"2024-07-19T14:24:47.71688Z","steps":["trace[1860969799] 'process raft request'  (duration: 164.932797ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:25:27.38522Z","caller":"traceutil/trace.go:171","msg":"trace[1381445707] transaction","detail":"{read_only:false; response_revision:1594; number_of_response:1; }","duration":"354.534015ms","start":"2024-07-19T14:25:27.030666Z","end":"2024-07-19T14:25:27.3852Z","steps":["trace[1381445707] 'process raft request'  (duration: 354.383611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:25:27.38546Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:25:27.030652Z","time spent":"354.645607ms","remote":"127.0.0.1:44308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1572 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-07-19T14:25:27.385672Z","caller":"traceutil/trace.go:171","msg":"trace[49061208] linearizableReadLoop","detail":"{readStateIndex:1647; appliedIndex:1647; }","duration":"256.615203ms","start":"2024-07-19T14:25:27.129041Z","end":"2024-07-19T14:25:27.385656Z","steps":["trace[49061208] 'read index received'  (duration: 256.611768ms)","trace[49061208] 'applied index is now lower than readState.Index'  (duration: 2.697µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T14:25:27.385827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.77681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T14:25:27.385855Z","caller":"traceutil/trace.go:171","msg":"trace[571064593] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1594; }","duration":"256.829999ms","start":"2024-07-19T14:25:27.129016Z","end":"2024-07-19T14:25:27.385846Z","steps":["trace[571064593] 'agreement among raft nodes before linearized reading'  (duration: 256.742365ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:25:27.386234Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.828289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-19T14:25:27.386265Z","caller":"traceutil/trace.go:171","msg":"trace[718249119] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1595; }","duration":"116.885818ms","start":"2024-07-19T14:25:27.26937Z","end":"2024-07-19T14:25:27.386256Z","steps":["trace[718249119] 'agreement among raft nodes before linearized reading'  (duration: 116.741526ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:25:27.386701Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.619652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T14:25:27.386745Z","caller":"traceutil/trace.go:171","msg":"trace[333550768] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:0; response_revision:1595; }","duration":"106.697029ms","start":"2024-07-19T14:25:27.280041Z","end":"2024-07-19T14:25:27.386738Z","steps":["trace[333550768] 'agreement among raft nodes before linearized reading'  (duration: 106.603151ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:25:57.619831Z","caller":"traceutil/trace.go:171","msg":"trace[2132885807] transaction","detail":"{read_only:false; response_revision:1691; number_of_response:1; }","duration":"165.692667ms","start":"2024-07-19T14:25:57.45411Z","end":"2024-07-19T14:25:57.619803Z","steps":["trace[2132885807] 'process raft request'  (duration: 165.592658ms)"],"step_count":1}
	
	
	==> gcp-auth [20d9dbb2af5f6f29b03875ec31a7fd37bfdc397bb47ab7420733d7e0bcbe7fe7] <==
	2024/07/19 14:24:24 GCP Auth Webhook started!
	2024/07/19 14:24:29 Ready to marshal response ...
	2024/07/19 14:24:29 Ready to write response ...
	2024/07/19 14:24:29 Ready to marshal response ...
	2024/07/19 14:24:29 Ready to write response ...
	2024/07/19 14:24:29 Ready to marshal response ...
	2024/07/19 14:24:29 Ready to write response ...
	2024/07/19 14:24:33 Ready to marshal response ...
	2024/07/19 14:24:33 Ready to write response ...
	2024/07/19 14:24:39 Ready to marshal response ...
	2024/07/19 14:24:39 Ready to write response ...
	2024/07/19 14:24:42 Ready to marshal response ...
	2024/07/19 14:24:42 Ready to write response ...
	2024/07/19 14:25:04 Ready to marshal response ...
	2024/07/19 14:25:04 Ready to write response ...
	2024/07/19 14:25:04 Ready to marshal response ...
	2024/07/19 14:25:04 Ready to write response ...
	2024/07/19 14:25:17 Ready to marshal response ...
	2024/07/19 14:25:17 Ready to write response ...
	2024/07/19 14:25:19 Ready to marshal response ...
	2024/07/19 14:25:19 Ready to write response ...
	2024/07/19 14:25:52 Ready to marshal response ...
	2024/07/19 14:25:52 Ready to write response ...
	2024/07/19 14:27:06 Ready to marshal response ...
	2024/07/19 14:27:06 Ready to write response ...
	
	
	==> kernel <==
	 14:30:27 up 8 min,  0 users,  load average: 0.50, 0.68, 0.47
	Linux addons-018825 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [810c03a705d7ab8c627dea3560f5d43ce473476daf57901f7b12501e14664767] <==
	E0719 14:24:54.149100       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 14:24:54.149965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 14:24:58.156717       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 14:24:58.156838       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0719 14:24:58.156917       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.59.41:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.59.41:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded
	I0719 14:24:58.180242       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0719 14:24:58.188609       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	E0719 14:25:33.207702       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0719 14:25:33.824922       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0719 14:26:10.340128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.340294       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.373687       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.374125       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.385461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.385570       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.394782       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.394875       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 14:26:10.433055       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 14:26:10.433582       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0719 14:26:11.386917       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0719 14:26:11.433050       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0719 14:26:11.448207       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0719 14:27:06.852865       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.176.29"}
	
	
	==> kube-controller-manager [b6ca310eec97b0bd7311b17a10863a3efdded86e9aaec9faadac106df96a7c1b] <==
	W0719 14:28:22.911393       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:28:22.911548       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:28:23.684556       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:28:23.684657       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:28:50.111585       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:28:50.111854       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:28:58.853347       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:28:58.853480       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:29:14.534445       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:29:14.534574       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:29:16.957234       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:29:16.957350       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:29:30.783142       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:29:30.783203       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:29:40.051213       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:29:40.051356       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:29:52.699204       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:29:52.699452       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:29:54.874328       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:29:54.874380       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0719 14:30:02.842030       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:30:02.842137       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0719 14:30:25.677895       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="10.526µs"
	W0719 14:30:26.548733       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0719 14:30:26.548782       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [fd66a0731caf0e83867eec135dd629a1f4282be77ce5bc42ee7d52f5795a1085] <==
	I0719 14:22:57.511109       1 server_linux.go:69] "Using iptables proxy"
	I0719 14:22:57.526288       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0719 14:22:57.635469       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 14:22:57.635578       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 14:22:57.635596       1 server_linux.go:165] "Using iptables Proxier"
	I0719 14:22:57.642809       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 14:22:57.643032       1 server.go:872] "Version info" version="v1.30.3"
	I0719 14:22:57.643045       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:22:57.657832       1 config.go:192] "Starting service config controller"
	I0719 14:22:57.657851       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 14:22:57.657892       1 config.go:101] "Starting endpoint slice config controller"
	I0719 14:22:57.657897       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 14:22:57.660150       1 config.go:319] "Starting node config controller"
	I0719 14:22:57.660161       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 14:22:57.758462       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 14:22:57.758565       1 shared_informer.go:320] Caches are synced for service config
	I0719 14:22:57.760608       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e3dc32aa02fb38b43b1aae9f3434fde74b891ada9886ab3b61c1766e7d1a8f1a] <==
	E0719 14:22:40.394089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:22:40.394118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 14:22:40.394229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:22:40.394261       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:22:40.394347       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 14:22:40.394391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 14:22:40.394874       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 14:22:40.394979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 14:22:41.219775       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:22:41.219891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:22:41.270371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 14:22:41.270587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 14:22:41.370297       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 14:22:41.370326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 14:22:41.423072       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 14:22:41.423183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 14:22:41.511864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 14:22:41.511922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 14:22:41.584885       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 14:22:41.584973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 14:22:41.602259       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 14:22:41.602734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 14:22:41.640759       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 14:22:41.640843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0719 14:22:41.983993       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 14:27:12 addons-018825 kubelet[1271]: I0719 14:27:12.893321    1271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd07cd6-48e5-4937-87f4-710c87412ac4" path="/var/lib/kubelet/pods/ebd07cd6-48e5-4937-87f4-710c87412ac4/volumes"
	Jul 19 14:27:42 addons-018825 kubelet[1271]: E0719 14:27:42.931039    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:27:42 addons-018825 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:27:42 addons-018825 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:27:42 addons-018825 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:27:42 addons-018825 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:27:43 addons-018825 kubelet[1271]: I0719 14:27:43.986318    1271 scope.go:117] "RemoveContainer" containerID="fac035dda401e0303701cd54a1fc03ced08976a403af563c6acdf8db18ab99cc"
	Jul 19 14:27:44 addons-018825 kubelet[1271]: I0719 14:27:44.006286    1271 scope.go:117] "RemoveContainer" containerID="9f6b1210c9e27bfbdc4b48fa3f1617ec520443c05a456d45c302ca42035bb408"
	Jul 19 14:28:42 addons-018825 kubelet[1271]: E0719 14:28:42.931967    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:28:42 addons-018825 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:28:42 addons-018825 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:28:42 addons-018825 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:28:42 addons-018825 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:29:42 addons-018825 kubelet[1271]: E0719 14:29:42.932817    1271 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:29:42 addons-018825 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:29:42 addons-018825 kubelet[1271]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:29:42 addons-018825 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:29:42 addons-018825 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:30:25 addons-018825 kubelet[1271]: I0719 14:30:25.702659    1271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-xms8k" podStartSLOduration=197.231257841 podStartE2EDuration="3m19.702611412s" podCreationTimestamp="2024-07-19 14:27:06 +0000 UTC" firstStartedPulling="2024-07-19 14:27:07.285815356 +0000 UTC m=+264.522577967" lastFinishedPulling="2024-07-19 14:27:09.757168911 +0000 UTC m=+266.993931538" observedRunningTime="2024-07-19 14:27:10.845998328 +0000 UTC m=+268.082760959" watchObservedRunningTime="2024-07-19 14:30:25.702611412 +0000 UTC m=+462.939374041"
	Jul 19 14:30:27 addons-018825 kubelet[1271]: I0719 14:30:27.123673    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4f3616b2-3dcb-414f-930a-494df347f25f-tmp-dir\") pod \"4f3616b2-3dcb-414f-930a-494df347f25f\" (UID: \"4f3616b2-3dcb-414f-930a-494df347f25f\") "
	Jul 19 14:30:27 addons-018825 kubelet[1271]: I0719 14:30:27.123741    1271 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f46f\" (UniqueName: \"kubernetes.io/projected/4f3616b2-3dcb-414f-930a-494df347f25f-kube-api-access-5f46f\") pod \"4f3616b2-3dcb-414f-930a-494df347f25f\" (UID: \"4f3616b2-3dcb-414f-930a-494df347f25f\") "
	Jul 19 14:30:27 addons-018825 kubelet[1271]: I0719 14:30:27.124212    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f3616b2-3dcb-414f-930a-494df347f25f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "4f3616b2-3dcb-414f-930a-494df347f25f" (UID: "4f3616b2-3dcb-414f-930a-494df347f25f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 19 14:30:27 addons-018825 kubelet[1271]: I0719 14:30:27.126702    1271 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3616b2-3dcb-414f-930a-494df347f25f-kube-api-access-5f46f" (OuterVolumeSpecName: "kube-api-access-5f46f") pod "4f3616b2-3dcb-414f-930a-494df347f25f" (UID: "4f3616b2-3dcb-414f-930a-494df347f25f"). InnerVolumeSpecName "kube-api-access-5f46f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 14:30:27 addons-018825 kubelet[1271]: I0719 14:30:27.224536    1271 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4f3616b2-3dcb-414f-930a-494df347f25f-tmp-dir\") on node \"addons-018825\" DevicePath \"\""
	Jul 19 14:30:27 addons-018825 kubelet[1271]: I0719 14:30:27.224590    1271 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5f46f\" (UniqueName: \"kubernetes.io/projected/4f3616b2-3dcb-414f-930a-494df347f25f-kube-api-access-5f46f\") on node \"addons-018825\" DevicePath \"\""
	
	
	==> storage-provisioner [822879e4213fdfc4e71531053f10ef61e74f8f1eb9e4453240360854c29f227d] <==
	I0719 14:23:05.113118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 14:23:05.136225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 14:23:05.136296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 14:23:05.146406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 14:23:05.146666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-018825_a167b234-6848-455d-82c0-996c63c3021d!
	I0719 14:23:05.153022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b8c82fa-361f-4961-849c-fa9007c57d08", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-018825_a167b234-6848-455d-82c0-996c63c3021d became leader
	I0719 14:23:05.255872       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-018825_a167b234-6848-455d-82c0-996c63c3021d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-018825 -n addons-018825
helpers_test.go:261: (dbg) Run:  kubectl --context addons-018825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (353.48s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-018825
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-018825: exit status 82 (2m0.459320657s)

                                                
                                                
-- stdout --
	* Stopping node "addons-018825"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-018825" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-018825
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-018825: exit status 11 (21.45725526s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-018825" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-018825
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-018825: exit status 11 (6.143123007s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-018825" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-018825
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-018825: exit status 11 (6.144526987s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-018825" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.20s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 node stop m02 -v=7 --alsologtostderr
E0719 14:44:28.744488   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:45:12.876370   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.465504832s)

                                                
                                                
-- stdout --
	* Stopping node "ha-999305-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:44:20.008423   26895 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:44:20.008548   26895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:44:20.008557   26895 out.go:304] Setting ErrFile to fd 2...
	I0719 14:44:20.008561   26895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:44:20.008735   26895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:44:20.008962   26895 mustload.go:65] Loading cluster: ha-999305
	I0719 14:44:20.009322   26895 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:44:20.009337   26895 stop.go:39] StopHost: ha-999305-m02
	I0719 14:44:20.009664   26895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:44:20.009709   26895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:44:20.024862   26895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46431
	I0719 14:44:20.025412   26895 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:44:20.026068   26895 main.go:141] libmachine: Using API Version  1
	I0719 14:44:20.026098   26895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:44:20.026465   26895 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:44:20.028942   26895 out.go:177] * Stopping node "ha-999305-m02"  ...
	I0719 14:44:20.030540   26895 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 14:44:20.030579   26895 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:44:20.030830   26895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 14:44:20.030876   26895 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:44:20.033801   26895 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:44:20.034316   26895 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:44:20.034355   26895 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:44:20.034489   26895 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:44:20.034642   26895 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:44:20.034804   26895 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:44:20.034983   26895 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:44:20.123851   26895 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 14:44:20.179253   26895 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 14:44:20.234296   26895 main.go:141] libmachine: Stopping "ha-999305-m02"...
	I0719 14:44:20.234328   26895 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:44:20.236058   26895 main.go:141] libmachine: (ha-999305-m02) Calling .Stop
	I0719 14:44:20.239657   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 0/120
	I0719 14:44:21.240798   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 1/120
	I0719 14:44:22.242454   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 2/120
	I0719 14:44:23.244742   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 3/120
	I0719 14:44:24.246261   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 4/120
	I0719 14:44:25.248117   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 5/120
	I0719 14:44:26.249409   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 6/120
	I0719 14:44:27.251175   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 7/120
	I0719 14:44:28.252752   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 8/120
	I0719 14:44:29.254029   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 9/120
	I0719 14:44:30.256271   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 10/120
	I0719 14:44:31.257592   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 11/120
	I0719 14:44:32.259244   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 12/120
	I0719 14:44:33.260997   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 13/120
	I0719 14:44:34.262249   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 14/120
	I0719 14:44:35.264096   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 15/120
	I0719 14:44:36.266104   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 16/120
	I0719 14:44:37.267824   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 17/120
	I0719 14:44:38.269234   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 18/120
	I0719 14:44:39.270809   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 19/120
	I0719 14:44:40.273120   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 20/120
	I0719 14:44:41.274566   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 21/120
	I0719 14:44:42.276652   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 22/120
	I0719 14:44:43.277928   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 23/120
	I0719 14:44:44.279291   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 24/120
	I0719 14:44:45.281107   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 25/120
	I0719 14:44:46.283308   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 26/120
	I0719 14:44:47.284460   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 27/120
	I0719 14:44:48.285706   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 28/120
	I0719 14:44:49.287597   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 29/120
	I0719 14:44:50.289610   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 30/120
	I0719 14:44:51.290987   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 31/120
	I0719 14:44:52.292898   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 32/120
	I0719 14:44:53.294223   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 33/120
	I0719 14:44:54.295440   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 34/120
	I0719 14:44:55.297326   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 35/120
	I0719 14:44:56.298733   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 36/120
	I0719 14:44:57.300057   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 37/120
	I0719 14:44:58.301330   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 38/120
	I0719 14:44:59.302698   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 39/120
	I0719 14:45:00.304501   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 40/120
	I0719 14:45:01.305685   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 41/120
	I0719 14:45:02.307933   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 42/120
	I0719 14:45:03.309498   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 43/120
	I0719 14:45:04.311239   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 44/120
	I0719 14:45:05.312526   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 45/120
	I0719 14:45:06.314058   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 46/120
	I0719 14:45:07.315516   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 47/120
	I0719 14:45:08.317069   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 48/120
	I0719 14:45:09.318484   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 49/120
	I0719 14:45:10.319969   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 50/120
	I0719 14:45:11.321054   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 51/120
	I0719 14:45:12.322526   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 52/120
	I0719 14:45:13.325109   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 53/120
	I0719 14:45:14.327513   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 54/120
	I0719 14:45:15.329761   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 55/120
	I0719 14:45:16.331137   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 56/120
	I0719 14:45:17.332605   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 57/120
	I0719 14:45:18.334087   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 58/120
	I0719 14:45:19.335469   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 59/120
	I0719 14:45:20.337624   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 60/120
	I0719 14:45:21.338990   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 61/120
	I0719 14:45:22.340732   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 62/120
	I0719 14:45:23.342025   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 63/120
	I0719 14:45:24.343994   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 64/120
	I0719 14:45:25.345729   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 65/120
	I0719 14:45:26.347097   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 66/120
	I0719 14:45:27.348598   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 67/120
	I0719 14:45:28.350071   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 68/120
	I0719 14:45:29.351617   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 69/120
	I0719 14:45:30.353697   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 70/120
	I0719 14:45:31.355247   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 71/120
	I0719 14:45:32.356477   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 72/120
	I0719 14:45:33.357591   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 73/120
	I0719 14:45:34.359159   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 74/120
	I0719 14:45:35.360869   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 75/120
	I0719 14:45:36.362135   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 76/120
	I0719 14:45:37.363562   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 77/120
	I0719 14:45:38.365153   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 78/120
	I0719 14:45:39.366678   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 79/120
	I0719 14:45:40.368715   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 80/120
	I0719 14:45:41.370029   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 81/120
	I0719 14:45:42.371260   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 82/120
	I0719 14:45:43.372818   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 83/120
	I0719 14:45:44.374166   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 84/120
	I0719 14:45:45.376260   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 85/120
	I0719 14:45:46.377487   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 86/120
	I0719 14:45:47.379485   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 87/120
	I0719 14:45:48.381867   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 88/120
	I0719 14:45:49.383342   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 89/120
	I0719 14:45:50.385265   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 90/120
	I0719 14:45:51.386511   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 91/120
	I0719 14:45:52.387769   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 92/120
	I0719 14:45:53.389351   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 93/120
	I0719 14:45:54.390607   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 94/120
	I0719 14:45:55.392418   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 95/120
	I0719 14:45:56.393620   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 96/120
	I0719 14:45:57.395118   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 97/120
	I0719 14:45:58.396416   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 98/120
	I0719 14:45:59.398647   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 99/120
	I0719 14:46:00.401151   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 100/120
	I0719 14:46:01.402486   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 101/120
	I0719 14:46:02.403865   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 102/120
	I0719 14:46:03.405315   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 103/120
	I0719 14:46:04.406739   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 104/120
	I0719 14:46:05.408890   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 105/120
	I0719 14:46:06.410199   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 106/120
	I0719 14:46:07.411687   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 107/120
	I0719 14:46:08.413000   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 108/120
	I0719 14:46:09.414978   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 109/120
	I0719 14:46:10.416752   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 110/120
	I0719 14:46:11.418204   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 111/120
	I0719 14:46:12.420211   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 112/120
	I0719 14:46:13.421352   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 113/120
	I0719 14:46:14.423175   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 114/120
	I0719 14:46:15.424790   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 115/120
	I0719 14:46:16.426400   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 116/120
	I0719 14:46:17.427942   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 117/120
	I0719 14:46:18.429246   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 118/120
	I0719 14:46:19.431027   26895 main.go:141] libmachine: (ha-999305-m02) Waiting for machine to stop 119/120
	I0719 14:46:20.431547   26895 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 14:46:20.431664   26895 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-999305 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (19.114852545s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:46:20.474065   27344 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:46:20.474180   27344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:20.474191   27344 out.go:304] Setting ErrFile to fd 2...
	I0719 14:46:20.474197   27344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:20.474405   27344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:46:20.474609   27344 out.go:298] Setting JSON to false
	I0719 14:46:20.474642   27344 mustload.go:65] Loading cluster: ha-999305
	I0719 14:46:20.474769   27344 notify.go:220] Checking for updates...
	I0719 14:46:20.475098   27344 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:46:20.475113   27344 status.go:255] checking status of ha-999305 ...
	I0719 14:46:20.475491   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:20.475555   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:20.494072   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0719 14:46:20.494541   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:20.495056   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:20.495075   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:20.495396   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:20.495532   27344 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:46:20.497023   27344 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:46:20.497044   27344 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:20.497310   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:20.497341   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:20.512479   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39603
	I0719 14:46:20.512859   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:20.513389   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:20.513409   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:20.513711   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:20.513932   27344 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:46:20.516705   27344 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:20.517096   27344 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:20.517133   27344 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:20.517236   27344 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:20.517641   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:20.517691   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:20.531795   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0719 14:46:20.532163   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:20.532588   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:20.532619   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:20.532929   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:20.533117   27344 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:46:20.533300   27344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:20.533339   27344 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:46:20.535820   27344 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:20.536182   27344 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:20.536208   27344 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:20.536279   27344 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:46:20.536430   27344 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:46:20.536571   27344 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:46:20.536736   27344 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:46:20.626433   27344 ssh_runner.go:195] Run: systemctl --version
	I0719 14:46:20.635224   27344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:20.652261   27344 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:46:20.652294   27344 api_server.go:166] Checking apiserver status ...
	I0719 14:46:20.652324   27344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:46:20.671156   27344 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:46:20.680773   27344 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:46:20.680823   27344 ssh_runner.go:195] Run: ls
	I0719 14:46:20.685220   27344 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:46:20.691332   27344 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:46:20.691351   27344 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:46:20.691360   27344 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:46:20.691375   27344 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:46:20.691700   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:20.691743   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:20.707366   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0719 14:46:20.707790   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:20.708234   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:20.708250   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:20.708579   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:20.708776   27344 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:46:20.710446   27344 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:46:20.710462   27344 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:20.710859   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:20.710900   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:20.725341   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I0719 14:46:20.725780   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:20.726185   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:20.726208   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:20.726527   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:20.726746   27344 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:46:20.729474   27344 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:20.729929   27344 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:20.729951   27344 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:20.730119   27344 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:20.730420   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:20.730451   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:20.744981   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I0719 14:46:20.745338   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:20.745811   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:20.745831   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:20.746131   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:20.746315   27344 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:46:20.746492   27344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:20.746514   27344 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:46:20.749161   27344 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:20.749602   27344 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:20.749637   27344 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:20.749754   27344 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:46:20.749960   27344 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:46:20.750105   27344 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:46:20.750283   27344 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	W0719 14:46:39.186433   27344 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:46:39.186536   27344 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0719 14:46:39.186559   27344 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:39.186572   27344 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:46:39.186594   27344 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:39.186624   27344 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:46:39.186954   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:39.187003   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:39.201509   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37193
	I0719 14:46:39.201976   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:39.202391   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:39.202410   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:39.202724   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:39.202901   27344 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:46:39.204294   27344 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:46:39.204313   27344 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:46:39.204623   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:39.204659   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:39.220537   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0719 14:46:39.221010   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:39.221455   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:39.221470   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:39.221762   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:39.221914   27344 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:46:39.224716   27344 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:39.225102   27344 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:46:39.225129   27344 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:39.225223   27344 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:46:39.225502   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:39.225551   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:39.240929   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0719 14:46:39.241373   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:39.241780   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:39.241801   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:39.242080   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:39.242230   27344 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:46:39.242404   27344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:39.242426   27344 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:46:39.245108   27344 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:39.245707   27344 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:46:39.245733   27344 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:39.245935   27344 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:46:39.246109   27344 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:46:39.246354   27344 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:46:39.246513   27344 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:46:39.332539   27344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:39.349466   27344 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:46:39.349497   27344 api_server.go:166] Checking apiserver status ...
	I0719 14:46:39.349525   27344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:46:39.364268   27344 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:46:39.376626   27344 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:46:39.376680   27344 ssh_runner.go:195] Run: ls
	I0719 14:46:39.381206   27344 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:46:39.385475   27344 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:46:39.385494   27344 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:46:39.385502   27344 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:46:39.385515   27344 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:46:39.385792   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:39.385828   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:39.401300   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I0719 14:46:39.401723   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:39.402181   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:39.402201   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:39.402569   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:39.402759   27344 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:46:39.404177   27344 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:46:39.404193   27344 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:46:39.404469   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:39.404499   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:39.418216   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0719 14:46:39.418631   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:39.419038   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:39.419063   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:39.419321   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:39.419497   27344 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:46:39.422086   27344 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:39.422501   27344 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:46:39.422529   27344 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:39.422636   27344 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:46:39.422923   27344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:39.422956   27344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:39.437079   27344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0719 14:46:39.437580   27344 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:39.438091   27344 main.go:141] libmachine: Using API Version  1
	I0719 14:46:39.438116   27344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:39.438492   27344 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:39.438707   27344 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:46:39.438930   27344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:39.438950   27344 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:46:39.441843   27344 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:39.442278   27344 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:46:39.442293   27344 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:39.442421   27344 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:46:39.442583   27344 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:46:39.442751   27344 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:46:39.442918   27344 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:46:39.527246   27344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:39.545869   27344 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr" : exit status 3
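For context, the failure above is a stop that never completed: "node stop m02" polled the libvirt domain 120 times and gave up while it was still "Running", and the follow-up "status" call then could not reach the node over SSH ("no route to host"). A minimal shell sketch of how this state could be inspected by hand on the CI host (assuming the same ha-999305 profile and kvm2/libvirt setup seen in the log; exact output will vary per run):

	# re-run the step that timed out after 120 polls ("Waiting for machine to stop N/120")
	out/minikube-linux-amd64 -p ha-999305 node stop m02 -v=7 --alsologtostderr; echo "exit: $?"
	# ask libvirt directly whether the m02 domain ever left the running state
	virsh domstate ha-999305-m02
	# the status check failed on port 22 of m02 with "no route to host"; probe it from the host
	nc -vz -w 5 192.168.39.163 22
	# as a last resort, force the domain off (hard power-off, not a graceful stop)
	virsh destroy ha-999305-m02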
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-999305 -n ha-999305
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-999305 logs -n 25: (1.40196669s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305:/home/docker/cp-test_ha-999305-m03_ha-999305.txt                      |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305 sudo cat                                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305.txt                                |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m02:/home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m04 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp testdata/cp-test.txt                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305:/home/docker/cp-test_ha-999305-m04_ha-999305.txt                      |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305 sudo cat                                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305.txt                                |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m02:/home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03:/home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m03 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-999305 node stop m02 -v=7                                                    | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:38:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:38:27.765006   22606 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:38:27.765117   22606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:38:27.765126   22606 out.go:304] Setting ErrFile to fd 2...
	I0719 14:38:27.765130   22606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:38:27.765290   22606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:38:27.765798   22606 out.go:298] Setting JSON to false
	I0719 14:38:27.766611   22606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1254,"bootTime":1721398654,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:38:27.766664   22606 start.go:139] virtualization: kvm guest
	I0719 14:38:27.769503   22606 out.go:177] * [ha-999305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:38:27.771032   22606 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:38:27.771040   22606 notify.go:220] Checking for updates...
	I0719 14:38:27.772433   22606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:38:27.773676   22606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:38:27.774784   22606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:38:27.775922   22606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:38:27.777176   22606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:38:27.778492   22606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:38:27.811750   22606 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 14:38:27.813006   22606 start.go:297] selected driver: kvm2
	I0719 14:38:27.813016   22606 start.go:901] validating driver "kvm2" against <nil>
	I0719 14:38:27.813026   22606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:38:27.813652   22606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:38:27.813725   22606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:38:27.827592   22606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:38:27.827638   22606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 14:38:27.827824   22606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:38:27.827873   22606 cni.go:84] Creating CNI manager for ""
	I0719 14:38:27.827884   22606 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 14:38:27.827889   22606 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 14:38:27.827960   22606 start.go:340] cluster config:
	{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:38:27.828052   22606 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:38:27.830672   22606 out.go:177] * Starting "ha-999305" primary control-plane node in "ha-999305" cluster
	I0719 14:38:27.831782   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:38:27.831806   22606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 14:38:27.831812   22606 cache.go:56] Caching tarball of preloaded images
	I0719 14:38:27.831873   22606 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:38:27.831882   22606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:38:27.832170   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:38:27.832189   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json: {Name:mkc4d7b141210cfb52ece9bf78a8c556f395293d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:27.832311   22606 start.go:360] acquireMachinesLock for ha-999305: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:38:27.832339   22606 start.go:364] duration metric: took 14.571µs to acquireMachinesLock for "ha-999305"
	I0719 14:38:27.832354   22606 start.go:93] Provisioning new machine with config: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:38:27.832414   22606 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 14:38:27.834522   22606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 14:38:27.834635   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:38:27.834665   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:38:27.847897   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I0719 14:38:27.848323   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:38:27.848912   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:38:27.848935   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:38:27.849226   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:38:27.849416   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:27.849537   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:27.849644   22606 start.go:159] libmachine.API.Create for "ha-999305" (driver="kvm2")
	I0719 14:38:27.849662   22606 client.go:168] LocalClient.Create starting
	I0719 14:38:27.849686   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:38:27.849711   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:38:27.849730   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:38:27.849772   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:38:27.849789   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:38:27.849799   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:38:27.849816   22606 main.go:141] libmachine: Running pre-create checks...
	I0719 14:38:27.849823   22606 main.go:141] libmachine: (ha-999305) Calling .PreCreateCheck
	I0719 14:38:27.850098   22606 main.go:141] libmachine: (ha-999305) Calling .GetConfigRaw
	I0719 14:38:27.850513   22606 main.go:141] libmachine: Creating machine...
	I0719 14:38:27.850530   22606 main.go:141] libmachine: (ha-999305) Calling .Create
	I0719 14:38:27.850636   22606 main.go:141] libmachine: (ha-999305) Creating KVM machine...
	I0719 14:38:27.851824   22606 main.go:141] libmachine: (ha-999305) DBG | found existing default KVM network
	I0719 14:38:27.852427   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:27.852314   22629 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0719 14:38:27.852449   22606 main.go:141] libmachine: (ha-999305) DBG | created network xml: 
	I0719 14:38:27.852461   22606 main.go:141] libmachine: (ha-999305) DBG | <network>
	I0719 14:38:27.852467   22606 main.go:141] libmachine: (ha-999305) DBG |   <name>mk-ha-999305</name>
	I0719 14:38:27.852476   22606 main.go:141] libmachine: (ha-999305) DBG |   <dns enable='no'/>
	I0719 14:38:27.852487   22606 main.go:141] libmachine: (ha-999305) DBG |   
	I0719 14:38:27.852499   22606 main.go:141] libmachine: (ha-999305) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 14:38:27.852514   22606 main.go:141] libmachine: (ha-999305) DBG |     <dhcp>
	I0719 14:38:27.852520   22606 main.go:141] libmachine: (ha-999305) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 14:38:27.852526   22606 main.go:141] libmachine: (ha-999305) DBG |     </dhcp>
	I0719 14:38:27.852533   22606 main.go:141] libmachine: (ha-999305) DBG |   </ip>
	I0719 14:38:27.852539   22606 main.go:141] libmachine: (ha-999305) DBG |   
	I0719 14:38:27.852544   22606 main.go:141] libmachine: (ha-999305) DBG | </network>
	I0719 14:38:27.852551   22606 main.go:141] libmachine: (ha-999305) DBG | 
	I0719 14:38:27.858073   22606 main.go:141] libmachine: (ha-999305) DBG | trying to create private KVM network mk-ha-999305 192.168.39.0/24...
	I0719 14:38:27.918530   22606 main.go:141] libmachine: (ha-999305) DBG | private KVM network mk-ha-999305 192.168.39.0/24 created
	I0719 14:38:27.918562   22606 main.go:141] libmachine: (ha-999305) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305 ...
	I0719 14:38:27.918585   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:27.918519   22629 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:38:27.918602   22606 main.go:141] libmachine: (ha-999305) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:38:27.918738   22606 main.go:141] libmachine: (ha-999305) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:38:28.144018   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:28.143897   22629 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa...
	I0719 14:38:28.331688   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:28.331580   22629 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/ha-999305.rawdisk...
	I0719 14:38:28.331715   22606 main.go:141] libmachine: (ha-999305) DBG | Writing magic tar header
	I0719 14:38:28.331724   22606 main.go:141] libmachine: (ha-999305) DBG | Writing SSH key tar header
	I0719 14:38:28.331732   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:28.331705   22629 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305 ...
	I0719 14:38:28.331855   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305
	I0719 14:38:28.331885   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305 (perms=drwx------)
	I0719 14:38:28.331895   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:38:28.331909   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:38:28.331918   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:38:28.331931   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:38:28.331942   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:38:28.331951   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home
	I0719 14:38:28.331963   22606 main.go:141] libmachine: (ha-999305) DBG | Skipping /home - not owner
	I0719 14:38:28.331973   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:38:28.331985   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:38:28.331994   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:38:28.332009   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:38:28.332020   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:38:28.332031   22606 main.go:141] libmachine: (ha-999305) Creating domain...
	I0719 14:38:28.332951   22606 main.go:141] libmachine: (ha-999305) define libvirt domain using xml: 
	I0719 14:38:28.332977   22606 main.go:141] libmachine: (ha-999305) <domain type='kvm'>
	I0719 14:38:28.332987   22606 main.go:141] libmachine: (ha-999305)   <name>ha-999305</name>
	I0719 14:38:28.333003   22606 main.go:141] libmachine: (ha-999305)   <memory unit='MiB'>2200</memory>
	I0719 14:38:28.333017   22606 main.go:141] libmachine: (ha-999305)   <vcpu>2</vcpu>
	I0719 14:38:28.333029   22606 main.go:141] libmachine: (ha-999305)   <features>
	I0719 14:38:28.333041   22606 main.go:141] libmachine: (ha-999305)     <acpi/>
	I0719 14:38:28.333066   22606 main.go:141] libmachine: (ha-999305)     <apic/>
	I0719 14:38:28.333085   22606 main.go:141] libmachine: (ha-999305)     <pae/>
	I0719 14:38:28.333105   22606 main.go:141] libmachine: (ha-999305)     
	I0719 14:38:28.333113   22606 main.go:141] libmachine: (ha-999305)   </features>
	I0719 14:38:28.333118   22606 main.go:141] libmachine: (ha-999305)   <cpu mode='host-passthrough'>
	I0719 14:38:28.333125   22606 main.go:141] libmachine: (ha-999305)   
	I0719 14:38:28.333130   22606 main.go:141] libmachine: (ha-999305)   </cpu>
	I0719 14:38:28.333137   22606 main.go:141] libmachine: (ha-999305)   <os>
	I0719 14:38:28.333142   22606 main.go:141] libmachine: (ha-999305)     <type>hvm</type>
	I0719 14:38:28.333149   22606 main.go:141] libmachine: (ha-999305)     <boot dev='cdrom'/>
	I0719 14:38:28.333153   22606 main.go:141] libmachine: (ha-999305)     <boot dev='hd'/>
	I0719 14:38:28.333161   22606 main.go:141] libmachine: (ha-999305)     <bootmenu enable='no'/>
	I0719 14:38:28.333167   22606 main.go:141] libmachine: (ha-999305)   </os>
	I0719 14:38:28.333180   22606 main.go:141] libmachine: (ha-999305)   <devices>
	I0719 14:38:28.333191   22606 main.go:141] libmachine: (ha-999305)     <disk type='file' device='cdrom'>
	I0719 14:38:28.333220   22606 main.go:141] libmachine: (ha-999305)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/boot2docker.iso'/>
	I0719 14:38:28.333240   22606 main.go:141] libmachine: (ha-999305)       <target dev='hdc' bus='scsi'/>
	I0719 14:38:28.333261   22606 main.go:141] libmachine: (ha-999305)       <readonly/>
	I0719 14:38:28.333281   22606 main.go:141] libmachine: (ha-999305)     </disk>
	I0719 14:38:28.333296   22606 main.go:141] libmachine: (ha-999305)     <disk type='file' device='disk'>
	I0719 14:38:28.333309   22606 main.go:141] libmachine: (ha-999305)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:38:28.333323   22606 main.go:141] libmachine: (ha-999305)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/ha-999305.rawdisk'/>
	I0719 14:38:28.333336   22606 main.go:141] libmachine: (ha-999305)       <target dev='hda' bus='virtio'/>
	I0719 14:38:28.333348   22606 main.go:141] libmachine: (ha-999305)     </disk>
	I0719 14:38:28.333364   22606 main.go:141] libmachine: (ha-999305)     <interface type='network'>
	I0719 14:38:28.333379   22606 main.go:141] libmachine: (ha-999305)       <source network='mk-ha-999305'/>
	I0719 14:38:28.333391   22606 main.go:141] libmachine: (ha-999305)       <model type='virtio'/>
	I0719 14:38:28.333405   22606 main.go:141] libmachine: (ha-999305)     </interface>
	I0719 14:38:28.333417   22606 main.go:141] libmachine: (ha-999305)     <interface type='network'>
	I0719 14:38:28.333431   22606 main.go:141] libmachine: (ha-999305)       <source network='default'/>
	I0719 14:38:28.333448   22606 main.go:141] libmachine: (ha-999305)       <model type='virtio'/>
	I0719 14:38:28.333469   22606 main.go:141] libmachine: (ha-999305)     </interface>
	I0719 14:38:28.333480   22606 main.go:141] libmachine: (ha-999305)     <serial type='pty'>
	I0719 14:38:28.333494   22606 main.go:141] libmachine: (ha-999305)       <target port='0'/>
	I0719 14:38:28.333505   22606 main.go:141] libmachine: (ha-999305)     </serial>
	I0719 14:38:28.333517   22606 main.go:141] libmachine: (ha-999305)     <console type='pty'>
	I0719 14:38:28.333525   22606 main.go:141] libmachine: (ha-999305)       <target type='serial' port='0'/>
	I0719 14:38:28.333533   22606 main.go:141] libmachine: (ha-999305)     </console>
	I0719 14:38:28.333540   22606 main.go:141] libmachine: (ha-999305)     <rng model='virtio'>
	I0719 14:38:28.333546   22606 main.go:141] libmachine: (ha-999305)       <backend model='random'>/dev/random</backend>
	I0719 14:38:28.333552   22606 main.go:141] libmachine: (ha-999305)     </rng>
	I0719 14:38:28.333556   22606 main.go:141] libmachine: (ha-999305)     
	I0719 14:38:28.333562   22606 main.go:141] libmachine: (ha-999305)     
	I0719 14:38:28.333567   22606 main.go:141] libmachine: (ha-999305)   </devices>
	I0719 14:38:28.333574   22606 main.go:141] libmachine: (ha-999305) </domain>
	I0719 14:38:28.333580   22606 main.go:141] libmachine: (ha-999305) 
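The domain follows the same pattern: the XML logged above is defined and then started, after which libvirt assigns the two MAC addresses reported in the next lines, one per <interface> element. A small sketch, assuming the XML has been saved to ha-999305.xml and accepting virsh in place of the driver's direct libvirt calls:

package main

import (
	"fmt"
	"os/exec"
)

// virsh runs a single virsh subcommand against the system libvirt instance.
func virsh(args ...string) string {
	out, err := exec.Command("virsh",
		append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("virsh %v failed: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	fmt.Print(virsh("define", "ha-999305.xml")) // persist the domain definition
	fmt.Print(virsh("start", "ha-999305"))      // boot the VM
	// List the NICs: the two <interface> elements yield the two MAC addresses
	// the log reports next, one on "default" and one on "mk-ha-999305".
	fmt.Print(virsh("domiflist", "ha-999305"))
}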
	I0719 14:38:28.337739   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:e7:36:0d in network default
	I0719 14:38:28.338175   22606 main.go:141] libmachine: (ha-999305) Ensuring networks are active...
	I0719 14:38:28.338194   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:28.338905   22606 main.go:141] libmachine: (ha-999305) Ensuring network default is active
	I0719 14:38:28.339205   22606 main.go:141] libmachine: (ha-999305) Ensuring network mk-ha-999305 is active
	I0719 14:38:28.339633   22606 main.go:141] libmachine: (ha-999305) Getting domain xml...
	I0719 14:38:28.340215   22606 main.go:141] libmachine: (ha-999305) Creating domain...
	I0719 14:38:29.493645   22606 main.go:141] libmachine: (ha-999305) Waiting to get IP...
	I0719 14:38:29.494268   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:29.494651   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:29.494674   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:29.494599   22629 retry.go:31] will retry after 295.963865ms: waiting for machine to come up
	I0719 14:38:29.792057   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:29.792405   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:29.792423   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:29.792360   22629 retry.go:31] will retry after 387.809257ms: waiting for machine to come up
	I0719 14:38:30.181895   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:30.182366   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:30.182410   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:30.182334   22629 retry.go:31] will retry after 306.839378ms: waiting for machine to come up
	I0719 14:38:30.490760   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:30.491198   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:30.491227   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:30.491149   22629 retry.go:31] will retry after 425.660464ms: waiting for machine to come up
	I0719 14:38:30.918594   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:30.918991   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:30.919012   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:30.918949   22629 retry.go:31] will retry after 501.872394ms: waiting for machine to come up
	I0719 14:38:31.422669   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:31.423199   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:31.423220   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:31.423161   22629 retry.go:31] will retry after 953.109864ms: waiting for machine to come up
	I0719 14:38:32.377483   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:32.377897   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:32.377944   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:32.377834   22629 retry.go:31] will retry after 717.613082ms: waiting for machine to come up
	I0719 14:38:33.097393   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:33.097744   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:33.097775   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:33.097692   22629 retry.go:31] will retry after 1.362631393s: waiting for machine to come up
	I0719 14:38:34.462110   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:34.462632   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:34.462652   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:34.462596   22629 retry.go:31] will retry after 1.619727371s: waiting for machine to come up
	I0719 14:38:36.084335   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:36.084838   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:36.084862   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:36.084766   22629 retry.go:31] will retry after 1.838449443s: waiting for machine to come up
	I0719 14:38:37.924319   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:37.924749   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:37.924764   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:37.924690   22629 retry.go:31] will retry after 2.845704536s: waiting for machine to come up
	I0719 14:38:40.773565   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:40.773913   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:40.773937   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:40.773887   22629 retry.go:31] will retry after 3.088536072s: waiting for machine to come up
	I0719 14:38:43.863936   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:43.864398   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:43.864427   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:43.864363   22629 retry.go:31] will retry after 3.174729971s: waiting for machine to come up
	I0719 14:38:47.042692   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.043188   22606 main.go:141] libmachine: (ha-999305) Found IP for machine: 192.168.39.240
	I0719 14:38:47.043210   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has current primary IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.043219   22606 main.go:141] libmachine: (ha-999305) Reserving static IP address...
	I0719 14:38:47.043580   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find host DHCP lease matching {name: "ha-999305", mac: "52:54:00:c3:55:82", ip: "192.168.39.240"} in network mk-ha-999305
	I0719 14:38:47.115495   22606 main.go:141] libmachine: (ha-999305) DBG | Getting to WaitForSSH function...
	I0719 14:38:47.115527   22606 main.go:141] libmachine: (ha-999305) Reserved static IP address: 192.168.39.240
	I0719 14:38:47.115539   22606 main.go:141] libmachine: (ha-999305) Waiting for SSH to be available...
	I0719 14:38:47.118059   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.118362   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.118391   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.118503   22606 main.go:141] libmachine: (ha-999305) DBG | Using SSH client type: external
	I0719 14:38:47.118546   22606 main.go:141] libmachine: (ha-999305) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa (-rw-------)
	I0719 14:38:47.118579   22606 main.go:141] libmachine: (ha-999305) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:38:47.118592   22606 main.go:141] libmachine: (ha-999305) DBG | About to run SSH command:
	I0719 14:38:47.118644   22606 main.go:141] libmachine: (ha-999305) DBG | exit 0
	I0719 14:38:47.246392   22606 main.go:141] libmachine: (ha-999305) DBG | SSH cmd err, output: <nil>: 
	I0719 14:38:47.246642   22606 main.go:141] libmachine: (ha-999305) KVM machine creation complete!
	I0719 14:38:47.246965   22606 main.go:141] libmachine: (ha-999305) Calling .GetConfigRaw
	I0719 14:38:47.247662   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:47.247932   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:47.248069   22606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:38:47.248082   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:38:47.249402   22606 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:38:47.249415   22606 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:38:47.249420   22606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:38:47.249426   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.251491   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.251876   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.251905   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.252078   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.252243   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.252398   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.252537   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.252693   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.252934   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.252950   22606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:38:47.353484   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
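The availability probe here is simply "run `exit 0` over SSH until it succeeds": the first attempt shells out to /usr/bin/ssh with the flags logged above, and subsequent ones use a native Go client. A self-contained sketch of the native variant, with the user, address and key path copied from the log (the retry interval is my own choice, not minikube's):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// User, address and key path are taken from the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the logged ssh flags disable host key checks too
		Timeout:         10 * time.Second,
	}

	// Keep trying "exit 0" until it succeeds, like the WaitForSSH loop above.
	for {
		if client, err := ssh.Dial("tcp", "192.168.39.240:22", cfg); err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				if runErr == nil {
					client.Close()
					fmt.Println("SSH is available")
					return
				}
			}
			client.Close()
		}
		time.Sleep(2 * time.Second) // arbitrary retry interval for this sketch
	}
}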
	I0719 14:38:47.353506   22606 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:38:47.353513   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.356224   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.356483   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.356522   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.356650   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.356875   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.357030   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.357168   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.357339   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.357500   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.357511   22606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:38:47.459016   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:38:47.459102   22606 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:38:47.459116   22606 main.go:141] libmachine: Provisioning with buildroot...
	I0719 14:38:47.459127   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:47.459414   22606 buildroot.go:166] provisioning hostname "ha-999305"
	I0719 14:38:47.459444   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:47.459631   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.462132   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.462441   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.462467   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.462620   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.462786   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.462939   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.463058   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.463188   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.463435   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.463458   22606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305 && echo "ha-999305" | sudo tee /etc/hostname
	I0719 14:38:47.580240   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305
	
	I0719 14:38:47.580268   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.582743   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.582986   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.583015   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.583171   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.583357   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.583515   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.583662   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.583784   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.583963   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.583978   22606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:38:47.695077   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:38:47.695106   22606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:38:47.695146   22606 buildroot.go:174] setting up certificates
	I0719 14:38:47.695160   22606 provision.go:84] configureAuth start
	I0719 14:38:47.695178   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:47.695452   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:47.698001   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.698345   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.698368   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.698506   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.700248   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.700536   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.700560   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.700699   22606 provision.go:143] copyHostCerts
	I0719 14:38:47.700737   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:38:47.700774   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:38:47.700786   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:38:47.700866   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:38:47.700979   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:38:47.701007   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:38:47.701017   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:38:47.701057   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:38:47.701129   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:38:47.701153   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:38:47.701161   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:38:47.701199   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:38:47.701284   22606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305 san=[127.0.0.1 192.168.39.240 ha-999305 localhost minikube]
	I0719 14:38:47.802791   22606 provision.go:177] copyRemoteCerts
	I0719 14:38:47.802843   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:38:47.802876   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.805089   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.805452   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.805486   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.805646   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.805850   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.806018   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.806219   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:47.888214   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:38:47.888293   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:38:47.913325   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:38:47.913403   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0719 14:38:47.936706   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:38:47.936767   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 14:38:47.959625   22606 provision.go:87] duration metric: took 264.451004ms to configureAuth
	I0719 14:38:47.959664   22606 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:38:47.959864   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:38:47.959932   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.962555   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.962980   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.963003   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.963203   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.963516   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.963686   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.963824   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.964050   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.964233   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.964253   22606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:38:48.223779   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:38:48.223811   22606 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:38:48.223821   22606 main.go:141] libmachine: (ha-999305) Calling .GetURL
	I0719 14:38:48.225043   22606 main.go:141] libmachine: (ha-999305) DBG | Using libvirt version 6000000
	I0719 14:38:48.227409   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.227726   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.227746   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.227905   22606 main.go:141] libmachine: Docker is up and running!
	I0719 14:38:48.227915   22606 main.go:141] libmachine: Reticulating splines...
	I0719 14:38:48.227921   22606 client.go:171] duration metric: took 20.378250961s to LocalClient.Create
	I0719 14:38:48.227941   22606 start.go:167] duration metric: took 20.378296192s to libmachine.API.Create "ha-999305"
	I0719 14:38:48.227952   22606 start.go:293] postStartSetup for "ha-999305" (driver="kvm2")
	I0719 14:38:48.227964   22606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:38:48.227981   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.228194   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:38:48.228222   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.230468   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.230765   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.230802   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.230952   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.231116   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.231279   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.231433   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:48.317347   22606 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:38:48.321759   22606 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:38:48.321782   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:38:48.321837   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:38:48.321930   22606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:38:48.321947   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:38:48.322071   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:38:48.331283   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:38:48.354411   22606 start.go:296] duration metric: took 126.447804ms for postStartSetup
	I0719 14:38:48.354456   22606 main.go:141] libmachine: (ha-999305) Calling .GetConfigRaw
	I0719 14:38:48.354981   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:48.357345   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.357624   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.357652   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.357853   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:38:48.358033   22606 start.go:128] duration metric: took 20.525608686s to createHost
	I0719 14:38:48.358061   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.360195   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.360459   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.360482   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.360587   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.360766   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.360930   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.361091   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.361370   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:48.361555   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:48.361566   22606 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:38:48.462972   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721399928.436594017
	
	I0719 14:38:48.462998   22606 fix.go:216] guest clock: 1721399928.436594017
	I0719 14:38:48.463010   22606 fix.go:229] Guest: 2024-07-19 14:38:48.436594017 +0000 UTC Remote: 2024-07-19 14:38:48.358048748 +0000 UTC m=+20.625559847 (delta=78.545269ms)
	I0719 14:38:48.463035   22606 fix.go:200] guest clock delta is within tolerance: 78.545269ms
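(The reported delta is just the difference of the two wall clocks: 1721399928.436594017 s − 1721399928.358048748 s ≈ 0.078545269 s, i.e. the 78.545269ms shown, comfortably inside the tolerance check.)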
	I0719 14:38:48.463044   22606 start.go:83] releasing machines lock for "ha-999305", held for 20.630696786s
	I0719 14:38:48.463068   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.463333   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:48.465876   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.466192   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.466219   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.466308   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.466805   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.466986   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.467096   22606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:38:48.467143   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.467203   22606 ssh_runner.go:195] Run: cat /version.json
	I0719 14:38:48.467228   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.469675   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.469826   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.470059   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.470086   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.470216   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.470221   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.470249   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.470419   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.470420   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.470601   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.470652   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.470757   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.470818   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:48.470868   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:48.567803   22606 ssh_runner.go:195] Run: systemctl --version
	I0719 14:38:48.574111   22606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:38:48.740377   22606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:38:48.746159   22606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:38:48.746225   22606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:38:48.762844   22606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:38:48.762870   22606 start.go:495] detecting cgroup driver to use...
	I0719 14:38:48.762932   22606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:38:48.778652   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:38:48.791736   22606 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:38:48.791783   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:38:48.804235   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:38:48.817135   22606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:38:48.926826   22606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:38:49.082076   22606 docker.go:233] disabling docker service ...
	I0719 14:38:49.082147   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:38:49.096477   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:38:49.110382   22606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:38:49.224555   22606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:38:49.345654   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:38:49.359204   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:38:49.378664   22606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:38:49.378741   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.389179   22606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:38:49.389249   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.399339   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.409418   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.419395   22606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:38:49.430021   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.440058   22606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.457072   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
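Taken together, the sed edits above converge on a CRI-O drop-in along the following lines. This is only a sketch that renders that end state; the TOML section headers ([crio.image], [crio.runtime]) are my assumption, since the log only rewrites individual keys inside the existing /etc/crio/crio.conf.d/02-crio.conf:

package main

import "fmt"

// dropIn is the state the sed edits converge on; the section headers are an
// assumption on my part, the log only shows the individual key rewrites.
const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// The log follows this up with "systemctl daemon-reload" and
	// "systemctl restart crio" so the runtime picks the values up.
	fmt.Print(dropIn)
}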
	I0719 14:38:49.467171   22606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:38:49.476795   22606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:38:49.476855   22606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:38:49.489479   22606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:38:49.498837   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:38:49.634942   22606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:38:49.771916   22606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:38:49.772021   22606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:38:49.776803   22606 start.go:563] Will wait 60s for crictl version
	I0719 14:38:49.776866   22606 ssh_runner.go:195] Run: which crictl
	I0719 14:38:49.780613   22606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:38:49.819994   22606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:38:49.820071   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:38:49.847398   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:38:49.877142   22606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:38:49.878338   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:49.880976   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:49.881292   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:49.881322   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:49.881561   22606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:38:49.886198   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:38:49.899497   22606 kubeadm.go:883] updating cluster {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 14:38:49.899616   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:38:49.899660   22606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:38:49.932339   22606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 14:38:49.932403   22606 ssh_runner.go:195] Run: which lz4
	I0719 14:38:49.936559   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 14:38:49.936644   22606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 14:38:49.940961   22606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 14:38:49.940990   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 14:38:51.358477   22606 crio.go:462] duration metric: took 1.421860886s to copy over tarball
	I0719 14:38:51.358571   22606 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 14:38:53.498960   22606 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.140358762s)
	I0719 14:38:53.498993   22606 crio.go:469] duration metric: took 2.140487816s to extract the tarball
	I0719 14:38:53.499003   22606 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 14:38:53.537877   22606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:38:53.584148   22606 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:38:53.584172   22606 cache_images.go:84] Images are preloaded, skipping loading
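Note: the preload path above works in two passes: `sudo crictl images --output json` is checked first, and only when the expected image set is missing (crio.go:510) is the ~406 MB tarball copied over and unpacked into /var with lz4. A rough local sketch of that decide-then-extract flow is below; the helper names are made up, and the real code runs every command over SSH rather than locally.

// ensurePreload extracts a preloaded image tarball only when the expected
// image is not already present, mirroring the crictl check + tar -I lz4
// sequence in the log.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func imagePresent(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	return bytes.Contains(out, []byte(ref)), nil
}

func ensurePreload(tarball, ref string) error {
	ok, err := imagePresent(ref)
	if err != nil || ok {
		return err
	}
	// Same tar flags as the log: preserve xattrs, decompress with lz4, unpack into /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	return cmd.Run()
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4", "registry.k8s.io/kube-apiserver:v1.30.3"); err != nil {
		fmt.Println("preload failed:", err)
	}
}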
	I0719 14:38:53.584180   22606 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.30.3 crio true true} ...
	I0719 14:38:53.584270   22606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
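Note: the kubelet [Unit]/[Service] text above is a systemd drop-in: the empty ExecStart= clears any inherited command before the versioned kubelet binary is re-declared with per-node flags, and the rendered file is later scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes, at 14:38:53.671190). A small sketch of writing such a drop-in (hypothetical helper; the flag set is abbreviated from the log) looks like this.

// writeKubeletDropIn renders a systemd drop-in in the style shown above:
// ExecStart is cleared and then redefined with node-specific flags.
package main

import (
	"fmt"
	"os"
)

func writeKubeletDropIn(path, version, nodeName, nodeIP string) error {
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, nodeName, nodeIP)
	return os.WriteFile(path, []byte(unit), 0644)
}

func main() {
	if err := writeKubeletDropIn("/tmp/10-kubeadm.conf", "v1.30.3", "ha-999305", "192.168.39.240"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}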
	I0719 14:38:53.584333   22606 ssh_runner.go:195] Run: crio config
	I0719 14:38:53.633383   22606 cni.go:84] Creating CNI manager for ""
	I0719 14:38:53.633405   22606 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 14:38:53.633416   22606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 14:38:53.633445   22606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-999305 NodeName:ha-999305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 14:38:53.633631   22606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-999305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
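Note: the kubeadm.go:187 dump above is a single multi-document file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration that kubeadm forwards to those components. Minikube renders it from the cluster config; a tiny text/template sketch of producing one such fragment (struct fields and template text are illustrative, not minikube's real template) is shown below.

// Render a small ClusterConfiguration fragment the way the log's kubeadm
// config is produced: fill a struct into a text/template.
package main

import (
	"os"
	"text/template"
)

type clusterCfg struct {
	KubernetesVersion    string
	ControlPlaneEndpoint string
	PodSubnet            string
	ServiceSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	c := clusterCfg{
		KubernetesVersion:    "v1.30.3",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	// template.Must panics on a malformed template, which is fine for a constant.
	template.Must(template.New("cc").Parse(tmpl)).Execute(os.Stdout, c)
}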
	
	I0719 14:38:53.633664   22606 kube-vip.go:115] generating kube-vip config ...
	I0719 14:38:53.633715   22606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:38:53.652624   22606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 14:38:53.652727   22606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
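Note: the kube-vip static pod above is what serves the HA virtual IP 192.168.39.254 on eth0: leader election (vip_leaderelection, lease plndr-cp-lock) decides which control-plane node owns the VIP, and cp_enable/lb_enable with lb_port 8443 load-balance API traffic across members. The lease settings only make sense when duration > renew deadline > retry period (5 > 3 > 1 here); a small sanity check for that ordering, written as a hypothetical helper rather than anything from kube-vip itself, is sketched below.

// validateLeaseTimings checks the ordering that Kubernetes-style leader
// election expects: leaseDuration > renewDeadline > retryPeriod > 0.
// The values in main mirror the kube-vip env vars in the log (5, 3, 1 seconds).
package main

import (
	"fmt"
	"time"
)

func validateLeaseTimings(lease, renew, retry time.Duration) error {
	if retry <= 0 {
		return fmt.Errorf("retry period must be positive, got %v", retry)
	}
	if renew <= retry {
		return fmt.Errorf("renew deadline %v must exceed retry period %v", renew, retry)
	}
	if lease <= renew {
		return fmt.Errorf("lease duration %v must exceed renew deadline %v", lease, renew)
	}
	return nil
}

func main() {
	err := validateLeaseTimings(5*time.Second, 3*time.Second, 1*time.Second)
	fmt.Println("lease timings ok:", err == nil)
}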
	I0719 14:38:53.652783   22606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:38:53.661917   22606 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 14:38:53.661966   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 14:38:53.671190   22606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 14:38:53.687918   22606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:38:53.704052   22606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 14:38:53.719908   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 14:38:53.736366   22606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:38:53.740336   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:38:53.751786   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:38:53.867207   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:38:53.883522   22606 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.240
	I0719 14:38:53.883542   22606 certs.go:194] generating shared ca certs ...
	I0719 14:38:53.883556   22606 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:53.883721   22606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:38:53.883785   22606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:38:53.883799   22606 certs.go:256] generating profile certs ...
	I0719 14:38:53.883856   22606 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:38:53.883874   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt with IP's: []
	I0719 14:38:53.979360   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt ...
	I0719 14:38:53.979383   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt: {Name:mkf392f6ff96dcc81bc3397b7b50c1b32ca916bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:53.979549   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key ...
	I0719 14:38:53.979565   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key: {Name:mk9acb9a9e075ab14413f6b865c2de54fa24f9bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:53.979662   22606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283
	I0719 14:38:53.979678   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.254]
	I0719 14:38:54.074807   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283 ...
	I0719 14:38:54.074835   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283: {Name:mkffb203a8ae205ca72ec4f55d228de23ee28a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:54.075023   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283 ...
	I0719 14:38:54.075043   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283: {Name:mkfeac060f4d29cac912c99484ff2e43f59647a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:54.075136   22606 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:38:54.075240   22606 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
	I0719 14:38:54.075312   22606 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
	I0719 14:38:54.075333   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt with IP's: []
	I0719 14:38:54.254701   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt ...
	I0719 14:38:54.254728   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt: {Name:mkf836da894897ca036860c077d099e64d3f6625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:54.254892   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key ...
	I0719 14:38:54.254906   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key: {Name:mk184db1c4e6cd1691efdc781b94dc81c19a79ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
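Note: the profile certificates above are generated on the host and then pushed to the node; the apiserver certificate (14:38:53.979678) is issued for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.254], which include the in-cluster service IP, loopback, the node IP, and the HA VIP. A self-contained crypto/x509 sketch of issuing a certificate with that SAN list follows; it is self-signed for brevity, whereas the real flow signs with the minikube CA.

// Generate a serving certificate whose IP SANs match the list in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.240"),
			net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed: template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}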
	I0719 14:38:54.255009   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:38:54.255034   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:38:54.255052   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:38:54.255067   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:38:54.255080   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:38:54.255095   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:38:54.255111   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:38:54.255129   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 14:38:54.255212   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:38:54.255263   22606 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:38:54.255273   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:38:54.255306   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:38:54.255336   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:38:54.255365   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:38:54.255418   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:38:54.255453   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.255471   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.255489   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.256077   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:38:54.282002   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:38:54.307677   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:38:54.331324   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:38:54.357398   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 14:38:54.384206   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 14:38:54.409121   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:38:54.434492   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:38:54.459718   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:38:54.482705   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:38:54.507249   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:38:54.531717   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 14:38:54.549411   22606 ssh_runner.go:195] Run: openssl version
	I0719 14:38:54.555580   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:38:54.569012   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.573835   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.573890   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.580256   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:38:54.592664   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:38:54.605015   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.609559   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.609611   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.615522   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:38:54.631644   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:38:54.657012   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.663750   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.663811   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.671470   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
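Note: the three openssl x509 -hash runs above compute the OpenSSL subject hash for each CA file, and the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the user certs) are what let OpenSSL-based clients look a trusted CA up by hash. A sketch that reproduces the hash-then-link step by shelling out to openssl (assumes openssl is on PATH; the paths in main are illustrative) is below.

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 style symlink for a
// CA certificate, mirroring the openssl x509 -hash + ln -fs pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Replace any stale link so the operation stays idempotent.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs")
	fmt.Println(link, err)
}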
	I0719 14:38:54.686203   22606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:38:54.693263   22606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:38:54.693307   22606 kubeadm.go:392] StartCluster: {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:38:54.693381   22606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 14:38:54.693419   22606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 14:38:54.733210   22606 cri.go:89] found id: ""
	I0719 14:38:54.733278   22606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 14:38:54.743778   22606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 14:38:54.754335   22606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 14:38:54.764986   22606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 14:38:54.765014   22606 kubeadm.go:157] found existing configuration files:
	
	I0719 14:38:54.765059   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 14:38:54.774137   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 14:38:54.774186   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 14:38:54.783474   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 14:38:54.792386   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 14:38:54.792447   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 14:38:54.802047   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 14:38:54.811883   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 14:38:54.811942   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 14:38:54.821861   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 14:38:54.831769   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 14:38:54.831831   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 14:38:54.841240   22606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 14:38:54.956454   22606 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 14:38:54.956546   22606 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 14:38:55.082082   22606 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 14:38:55.082228   22606 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 14:38:55.082370   22606 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 14:38:55.291658   22606 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 14:38:55.510421   22606 out.go:204]   - Generating certificates and keys ...
	I0719 14:38:55.510535   22606 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 14:38:55.510638   22606 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 14:38:55.510750   22606 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 14:38:55.592169   22606 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 14:38:55.737981   22606 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 14:38:55.819674   22606 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 14:38:56.000594   22606 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 14:38:56.000999   22606 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-999305 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0719 14:38:56.093074   22606 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 14:38:56.093209   22606 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-999305 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0719 14:38:56.250361   22606 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 14:38:56.567810   22606 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 14:38:56.854088   22606 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 14:38:56.854393   22606 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 14:38:56.968705   22606 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 14:38:57.113690   22606 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 14:38:57.231733   22606 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 14:38:57.346496   22606 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 14:38:57.631011   22606 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 14:38:57.631462   22606 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 14:38:57.633930   22606 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 14:38:57.636047   22606 out.go:204]   - Booting up control plane ...
	I0719 14:38:57.636156   22606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 14:38:57.636251   22606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 14:38:57.636353   22606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 14:38:57.650374   22606 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 14:38:57.651176   22606 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 14:38:57.651218   22606 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 14:38:57.778939   22606 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 14:38:57.779040   22606 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 14:38:58.279844   22606 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.402308ms
	I0719 14:38:58.279929   22606 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 14:39:04.334957   22606 kubeadm.go:310] [api-check] The API server is healthy after 6.055657626s
	I0719 14:39:04.346343   22606 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 14:39:04.366875   22606 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 14:39:04.897879   22606 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 14:39:04.898112   22606 kubeadm.go:310] [mark-control-plane] Marking the node ha-999305 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 14:39:04.911099   22606 kubeadm.go:310] [bootstrap-token] Using token: y3wvba.2pi3h6tz5c5qfy1e
	I0719 14:39:04.912495   22606 out.go:204]   - Configuring RBAC rules ...
	I0719 14:39:04.912635   22606 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 14:39:04.923852   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 14:39:04.931874   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 14:39:04.935251   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 14:39:04.938428   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 14:39:04.942392   22606 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 14:39:04.957243   22606 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 14:39:05.220162   22606 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 14:39:05.738653   22606 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 14:39:05.738674   22606 kubeadm.go:310] 
	I0719 14:39:05.738726   22606 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 14:39:05.738733   22606 kubeadm.go:310] 
	I0719 14:39:05.738810   22606 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 14:39:05.738820   22606 kubeadm.go:310] 
	I0719 14:39:05.738856   22606 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 14:39:05.738932   22606 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 14:39:05.738977   22606 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 14:39:05.738982   22606 kubeadm.go:310] 
	I0719 14:39:05.739071   22606 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 14:39:05.739095   22606 kubeadm.go:310] 
	I0719 14:39:05.739170   22606 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 14:39:05.739182   22606 kubeadm.go:310] 
	I0719 14:39:05.739259   22606 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 14:39:05.739365   22606 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 14:39:05.739465   22606 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 14:39:05.739477   22606 kubeadm.go:310] 
	I0719 14:39:05.739591   22606 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 14:39:05.739696   22606 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 14:39:05.739717   22606 kubeadm.go:310] 
	I0719 14:39:05.739845   22606 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y3wvba.2pi3h6tz5c5qfy1e \
	I0719 14:39:05.739950   22606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 14:39:05.739969   22606 kubeadm.go:310] 	--control-plane 
	I0719 14:39:05.739974   22606 kubeadm.go:310] 
	I0719 14:39:05.740037   22606 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 14:39:05.740043   22606 kubeadm.go:310] 
	I0719 14:39:05.740117   22606 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y3wvba.2pi3h6tz5c5qfy1e \
	I0719 14:39:05.740195   22606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 14:39:05.740807   22606 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
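Note: the --discovery-token-ca-cert-hash printed in the join commands above is not arbitrary: it is "sha256:" followed by the SHA-256 digest of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo), which is how joining nodes pin the control plane they bootstrap against. It can be recomputed from ca.crt with a few lines of Go, sketched below.

// Recompute kubeadm's discovery-token-ca-cert-hash from a CA certificate:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the cert's public key.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(h, err)
}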
	I0719 14:39:05.740847   22606 cni.go:84] Creating CNI manager for ""
	I0719 14:39:05.740861   22606 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 14:39:05.742724   22606 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 14:39:05.743994   22606 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 14:39:05.749373   22606 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 14:39:05.749391   22606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 14:39:05.767344   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 14:39:06.143874   22606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 14:39:06.143951   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:06.143964   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-999305 minikube.k8s.io/updated_at=2024_07_19T14_39_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=ha-999305 minikube.k8s.io/primary=true
	I0719 14:39:06.276600   22606 ops.go:34] apiserver oom_adj: -16
	I0719 14:39:06.276768   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:06.777050   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:07.277195   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:07.777067   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:08.277603   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:08.777781   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:09.277109   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:09.777094   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:10.276907   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:10.777577   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:11.276979   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:11.776980   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:12.277746   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:12.777823   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:13.276898   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:13.777065   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:14.276791   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:14.777109   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:15.277778   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:15.777382   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:16.277691   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:16.777000   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:17.277794   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:17.371835   22606 kubeadm.go:1113] duration metric: took 11.227943395s to wait for elevateKubeSystemPrivileges
	I0719 14:39:17.371869   22606 kubeadm.go:394] duration metric: took 22.678563939s to StartCluster
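Note: the run of `kubectl get sa default` calls between 14:39:06.276768 and 14:39:17.277794 is a poll loop: minikube retries roughly every 500ms until the default service account exists, which is what the 11.2s elevateKubeSystemPrivileges metric above measures. A generic sketch of that wait-with-deadline pattern follows; the condition in main is only a stand-in for the real service-account check.

// waitFor polls a condition at a fixed interval until it succeeds or the
// deadline passes, like the repeated "kubectl get sa default" calls above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(interval, timeout time.Duration, cond func() bool) error {
	deadline := time.Now().Add(timeout)
	for {
		if cond() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitFor(500*time.Millisecond, 5*time.Second, func() bool {
		// Stand-in for "does the default service account exist yet?"
		return time.Since(start) > 2*time.Second
	})
	fmt.Println("done:", err)
}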
	I0719 14:39:17.371889   22606 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:17.371962   22606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:39:17.372666   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:17.372913   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 14:39:17.372932   22606 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 14:39:17.372998   22606 addons.go:69] Setting storage-provisioner=true in profile "ha-999305"
	I0719 14:39:17.373032   22606 addons.go:69] Setting default-storageclass=true in profile "ha-999305"
	I0719 14:39:17.373101   22606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-999305"
	I0719 14:39:17.373142   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:17.372911   22606 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:39:17.373192   22606 start.go:241] waiting for startup goroutines ...
	I0719 14:39:17.373021   22606 addons.go:234] Setting addon storage-provisioner=true in "ha-999305"
	I0719 14:39:17.373232   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:39:17.373573   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.373600   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.373621   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.373629   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.388259   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0719 14:39:17.388444   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0719 14:39:17.388729   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.388915   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.389279   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.389303   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.389429   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.389448   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.389622   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.389770   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.389795   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:17.390338   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.390377   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.392008   22606 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:39:17.392239   22606 kapi.go:59] client config for ha-999305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key", CAFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 14:39:17.392683   22606 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 14:39:17.392821   22606 addons.go:234] Setting addon default-storageclass=true in "ha-999305"
	I0719 14:39:17.392850   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:39:17.393106   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.393138   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.404416   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0719 14:39:17.404861   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.405327   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.405352   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.405638   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.405811   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:17.407397   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0719 14:39:17.407444   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:39:17.407768   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.408111   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.408128   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.408433   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.408857   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.408892   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.409515   22606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 14:39:17.410920   22606 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:39:17.410939   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 14:39:17.410962   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:39:17.413815   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.414268   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:39:17.414295   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.414363   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:39:17.414522   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:39:17.414665   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:39:17.414859   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:39:17.424410   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0719 14:39:17.424750   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.425223   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.425243   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.425547   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.425758   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:17.427230   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:39:17.427415   22606 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 14:39:17.427439   22606 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 14:39:17.427457   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:39:17.429845   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.430164   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:39:17.430182   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.430375   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:39:17.430498   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:39:17.430657   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:39:17.430767   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:39:17.473439   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 14:39:17.556123   22606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:39:17.581965   22606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 14:39:17.880786   22606 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
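Note: the long pipeline at 14:39:17.473439 patches the coredns ConfigMap in place: sed inserts a hosts stanza mapping 192.168.39.1 to host.minikube.internal just before the forward plugin (plus a log directive after errors), so pods can resolve the hypervisor host by name; the "host record injected" message above reports the result. A string-level Go sketch of the same insertion (hypothetical helper, toy Corefile in main) follows.

// injectHostsStanza inserts a CoreDNS hosts block immediately before the
// "forward" plugin line, mirroring the sed edit of the coredns ConfigMap.
package main

import (
	"fmt"
	"strings"
)

func injectHostsStanza(corefile, ip, host string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			b.WriteString(stanza)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
	fmt.Print(injectHostsStanza(corefile, "192.168.39.1", "host.minikube.internal"))
}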
	I0719 14:39:18.184337   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184361   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.184533   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184553   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.184706   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.184747   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.184783   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.184808   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.184821   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184823   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.184829   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.184833   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.184843   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184852   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.185020   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.185041   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.185131   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.185149   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.185161   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.185256   22606 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 14:39:18.185266   22606 round_trippers.go:469] Request Headers:
	I0719 14:39:18.185276   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:39:18.185282   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:39:18.199781   22606 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0719 14:39:18.200571   22606 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 14:39:18.200586   22606 round_trippers.go:469] Request Headers:
	I0719 14:39:18.200616   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:39:18.200625   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:39:18.200628   22606 round_trippers.go:473]     Content-Type: application/json
	I0719 14:39:18.217314   22606 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0719 14:39:18.217484   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.217501   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.217806   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.217825   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.217830   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.219393   22606 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 14:39:18.220509   22606 addons.go:510] duration metric: took 847.580492ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 14:39:18.220541   22606 start.go:246] waiting for cluster config update ...
	I0719 14:39:18.220556   22606 start.go:255] writing updated cluster config ...
	I0719 14:39:18.222000   22606 out.go:177] 
	I0719 14:39:18.223231   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:18.223309   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:39:18.224712   22606 out.go:177] * Starting "ha-999305-m02" control-plane node in "ha-999305" cluster
	I0719 14:39:18.225863   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:39:18.225891   22606 cache.go:56] Caching tarball of preloaded images
	I0719 14:39:18.226007   22606 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:39:18.226023   22606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:39:18.226115   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:39:18.226373   22606 start.go:360] acquireMachinesLock for ha-999305-m02: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:39:18.226430   22606 start.go:364] duration metric: took 34.94µs to acquireMachinesLock for "ha-999305-m02"
	I0719 14:39:18.226452   22606 start.go:93] Provisioning new machine with config: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:39:18.226553   22606 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0719 14:39:18.228171   22606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 14:39:18.228260   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:18.228302   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:18.242856   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0719 14:39:18.243275   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:18.243721   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:18.243742   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:18.244015   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:18.244196   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:18.244345   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:18.244474   22606 start.go:159] libmachine.API.Create for "ha-999305" (driver="kvm2")
	I0719 14:39:18.244502   22606 client.go:168] LocalClient.Create starting
	I0719 14:39:18.244537   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:39:18.244578   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:39:18.244599   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:39:18.244670   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:39:18.244695   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:39:18.244709   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:39:18.244734   22606 main.go:141] libmachine: Running pre-create checks...
	I0719 14:39:18.244745   22606 main.go:141] libmachine: (ha-999305-m02) Calling .PreCreateCheck
	I0719 14:39:18.244894   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetConfigRaw
	I0719 14:39:18.245213   22606 main.go:141] libmachine: Creating machine...
	I0719 14:39:18.245226   22606 main.go:141] libmachine: (ha-999305-m02) Calling .Create
	I0719 14:39:18.245363   22606 main.go:141] libmachine: (ha-999305-m02) Creating KVM machine...
	I0719 14:39:18.246682   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found existing default KVM network
	I0719 14:39:18.246804   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found existing private KVM network mk-ha-999305
	I0719 14:39:18.246927   22606 main.go:141] libmachine: (ha-999305-m02) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02 ...
	I0719 14:39:18.246964   22606 main.go:141] libmachine: (ha-999305-m02) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:39:18.246981   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.246903   22975 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:39:18.247087   22606 main.go:141] libmachine: (ha-999305-m02) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:39:18.462228   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.462084   22975 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa...
	I0719 14:39:18.582334   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.582194   22975 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/ha-999305-m02.rawdisk...
	I0719 14:39:18.582373   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Writing magic tar header
	I0719 14:39:18.582387   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Writing SSH key tar header
	I0719 14:39:18.582403   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.582368   22975 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02 ...
	I0719 14:39:18.582536   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02
	I0719 14:39:18.582564   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:39:18.582577   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02 (perms=drwx------)
	I0719 14:39:18.582594   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:39:18.582620   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:39:18.582653   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:39:18.582672   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:39:18.582689   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:39:18.582704   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:39:18.582717   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:39:18.582731   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:39:18.582745   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:39:18.582757   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home
	I0719 14:39:18.582774   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Skipping /home - not owner
	I0719 14:39:18.582789   22606 main.go:141] libmachine: (ha-999305-m02) Creating domain...
	I0719 14:39:18.583621   22606 main.go:141] libmachine: (ha-999305-m02) define libvirt domain using xml: 
	I0719 14:39:18.583642   22606 main.go:141] libmachine: (ha-999305-m02) <domain type='kvm'>
	I0719 14:39:18.583657   22606 main.go:141] libmachine: (ha-999305-m02)   <name>ha-999305-m02</name>
	I0719 14:39:18.583665   22606 main.go:141] libmachine: (ha-999305-m02)   <memory unit='MiB'>2200</memory>
	I0719 14:39:18.583682   22606 main.go:141] libmachine: (ha-999305-m02)   <vcpu>2</vcpu>
	I0719 14:39:18.583688   22606 main.go:141] libmachine: (ha-999305-m02)   <features>
	I0719 14:39:18.583699   22606 main.go:141] libmachine: (ha-999305-m02)     <acpi/>
	I0719 14:39:18.583704   22606 main.go:141] libmachine: (ha-999305-m02)     <apic/>
	I0719 14:39:18.583711   22606 main.go:141] libmachine: (ha-999305-m02)     <pae/>
	I0719 14:39:18.583715   22606 main.go:141] libmachine: (ha-999305-m02)     
	I0719 14:39:18.583730   22606 main.go:141] libmachine: (ha-999305-m02)   </features>
	I0719 14:39:18.583738   22606 main.go:141] libmachine: (ha-999305-m02)   <cpu mode='host-passthrough'>
	I0719 14:39:18.583759   22606 main.go:141] libmachine: (ha-999305-m02)   
	I0719 14:39:18.583785   22606 main.go:141] libmachine: (ha-999305-m02)   </cpu>
	I0719 14:39:18.583795   22606 main.go:141] libmachine: (ha-999305-m02)   <os>
	I0719 14:39:18.583808   22606 main.go:141] libmachine: (ha-999305-m02)     <type>hvm</type>
	I0719 14:39:18.583822   22606 main.go:141] libmachine: (ha-999305-m02)     <boot dev='cdrom'/>
	I0719 14:39:18.583837   22606 main.go:141] libmachine: (ha-999305-m02)     <boot dev='hd'/>
	I0719 14:39:18.583850   22606 main.go:141] libmachine: (ha-999305-m02)     <bootmenu enable='no'/>
	I0719 14:39:18.583862   22606 main.go:141] libmachine: (ha-999305-m02)   </os>
	I0719 14:39:18.583873   22606 main.go:141] libmachine: (ha-999305-m02)   <devices>
	I0719 14:39:18.583886   22606 main.go:141] libmachine: (ha-999305-m02)     <disk type='file' device='cdrom'>
	I0719 14:39:18.583902   22606 main.go:141] libmachine: (ha-999305-m02)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/boot2docker.iso'/>
	I0719 14:39:18.583914   22606 main.go:141] libmachine: (ha-999305-m02)       <target dev='hdc' bus='scsi'/>
	I0719 14:39:18.583941   22606 main.go:141] libmachine: (ha-999305-m02)       <readonly/>
	I0719 14:39:18.583958   22606 main.go:141] libmachine: (ha-999305-m02)     </disk>
	I0719 14:39:18.583984   22606 main.go:141] libmachine: (ha-999305-m02)     <disk type='file' device='disk'>
	I0719 14:39:18.584003   22606 main.go:141] libmachine: (ha-999305-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:39:18.584027   22606 main.go:141] libmachine: (ha-999305-m02)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/ha-999305-m02.rawdisk'/>
	I0719 14:39:18.584036   22606 main.go:141] libmachine: (ha-999305-m02)       <target dev='hda' bus='virtio'/>
	I0719 14:39:18.584048   22606 main.go:141] libmachine: (ha-999305-m02)     </disk>
	I0719 14:39:18.584059   22606 main.go:141] libmachine: (ha-999305-m02)     <interface type='network'>
	I0719 14:39:18.584070   22606 main.go:141] libmachine: (ha-999305-m02)       <source network='mk-ha-999305'/>
	I0719 14:39:18.584083   22606 main.go:141] libmachine: (ha-999305-m02)       <model type='virtio'/>
	I0719 14:39:18.584093   22606 main.go:141] libmachine: (ha-999305-m02)     </interface>
	I0719 14:39:18.584104   22606 main.go:141] libmachine: (ha-999305-m02)     <interface type='network'>
	I0719 14:39:18.584117   22606 main.go:141] libmachine: (ha-999305-m02)       <source network='default'/>
	I0719 14:39:18.584128   22606 main.go:141] libmachine: (ha-999305-m02)       <model type='virtio'/>
	I0719 14:39:18.584140   22606 main.go:141] libmachine: (ha-999305-m02)     </interface>
	I0719 14:39:18.584150   22606 main.go:141] libmachine: (ha-999305-m02)     <serial type='pty'>
	I0719 14:39:18.584161   22606 main.go:141] libmachine: (ha-999305-m02)       <target port='0'/>
	I0719 14:39:18.584171   22606 main.go:141] libmachine: (ha-999305-m02)     </serial>
	I0719 14:39:18.584184   22606 main.go:141] libmachine: (ha-999305-m02)     <console type='pty'>
	I0719 14:39:18.584195   22606 main.go:141] libmachine: (ha-999305-m02)       <target type='serial' port='0'/>
	I0719 14:39:18.584205   22606 main.go:141] libmachine: (ha-999305-m02)     </console>
	I0719 14:39:18.584225   22606 main.go:141] libmachine: (ha-999305-m02)     <rng model='virtio'>
	I0719 14:39:18.584239   22606 main.go:141] libmachine: (ha-999305-m02)       <backend model='random'>/dev/random</backend>
	I0719 14:39:18.584250   22606 main.go:141] libmachine: (ha-999305-m02)     </rng>
	I0719 14:39:18.584261   22606 main.go:141] libmachine: (ha-999305-m02)     
	I0719 14:39:18.584268   22606 main.go:141] libmachine: (ha-999305-m02)     
	I0719 14:39:18.584278   22606 main.go:141] libmachine: (ha-999305-m02)   </devices>
	I0719 14:39:18.584292   22606 main.go:141] libmachine: (ha-999305-m02) </domain>
	I0719 14:39:18.584306   22606 main.go:141] libmachine: (ha-999305-m02) 
	I0719 14:39:18.590654   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:73:3f:f4 in network default
	I0719 14:39:18.591159   22606 main.go:141] libmachine: (ha-999305-m02) Ensuring networks are active...
	I0719 14:39:18.591192   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:18.591867   22606 main.go:141] libmachine: (ha-999305-m02) Ensuring network default is active
	I0719 14:39:18.592066   22606 main.go:141] libmachine: (ha-999305-m02) Ensuring network mk-ha-999305 is active
	I0719 14:39:18.592371   22606 main.go:141] libmachine: (ha-999305-m02) Getting domain xml...
	I0719 14:39:18.593040   22606 main.go:141] libmachine: (ha-999305-m02) Creating domain...
	I0719 14:39:19.829028   22606 main.go:141] libmachine: (ha-999305-m02) Waiting to get IP...
	I0719 14:39:19.829882   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:19.830355   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:19.830379   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:19.830335   22975 retry.go:31] will retry after 232.698136ms: waiting for machine to come up
	I0719 14:39:20.064451   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:20.064879   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:20.064906   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:20.064838   22975 retry.go:31] will retry after 300.649663ms: waiting for machine to come up
	I0719 14:39:20.367477   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:20.367880   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:20.367900   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:20.367837   22975 retry.go:31] will retry after 308.173928ms: waiting for machine to come up
	I0719 14:39:20.677371   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:20.677828   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:20.677883   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:20.677813   22975 retry.go:31] will retry after 527.141479ms: waiting for machine to come up
	I0719 14:39:21.206519   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:21.207014   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:21.207044   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:21.206975   22975 retry.go:31] will retry after 527.998334ms: waiting for machine to come up
	I0719 14:39:21.736776   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:21.737213   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:21.737243   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:21.737166   22975 retry.go:31] will retry after 825.77254ms: waiting for machine to come up
	I0719 14:39:22.564616   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:22.565026   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:22.565064   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:22.565001   22975 retry.go:31] will retry after 909.482551ms: waiting for machine to come up
	I0719 14:39:23.475812   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:23.476310   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:23.476335   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:23.476264   22975 retry.go:31] will retry after 1.114340427s: waiting for machine to come up
	I0719 14:39:24.592057   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:24.592483   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:24.592513   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:24.592436   22975 retry.go:31] will retry after 1.413057812s: waiting for machine to come up
	I0719 14:39:26.007232   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:26.007705   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:26.007731   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:26.007654   22975 retry.go:31] will retry after 1.543069671s: waiting for machine to come up
	I0719 14:39:27.554873   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:27.555323   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:27.555346   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:27.555276   22975 retry.go:31] will retry after 2.033378244s: waiting for machine to come up
	I0719 14:39:29.589995   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:29.590403   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:29.590424   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:29.590384   22975 retry.go:31] will retry after 2.879562841s: waiting for machine to come up
	I0719 14:39:32.472168   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:32.472585   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:32.472608   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:32.472542   22975 retry.go:31] will retry after 4.312500232s: waiting for machine to come up
	I0719 14:39:36.787365   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:36.787784   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:36.787811   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:36.787737   22975 retry.go:31] will retry after 3.923983309s: waiting for machine to come up
	I0719 14:39:40.715144   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.715607   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has current primary IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.715628   22606 main.go:141] libmachine: (ha-999305-m02) Found IP for machine: 192.168.39.163
	I0719 14:39:40.715642   22606 main.go:141] libmachine: (ha-999305-m02) Reserving static IP address...
	I0719 14:39:40.716060   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find host DHCP lease matching {name: "ha-999305-m02", mac: "52:54:00:8f:f6:ba", ip: "192.168.39.163"} in network mk-ha-999305
	I0719 14:39:40.788615   22606 main.go:141] libmachine: (ha-999305-m02) Reserved static IP address: 192.168.39.163
	I0719 14:39:40.788635   22606 main.go:141] libmachine: (ha-999305-m02) Waiting for SSH to be available...
	I0719 14:39:40.788681   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Getting to WaitForSSH function...
	I0719 14:39:40.791139   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.791475   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:40.791512   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.791680   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Using SSH client type: external
	I0719 14:39:40.791704   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa (-rw-------)
	I0719 14:39:40.791741   22606 main.go:141] libmachine: (ha-999305-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:39:40.791761   22606 main.go:141] libmachine: (ha-999305-m02) DBG | About to run SSH command:
	I0719 14:39:40.791777   22606 main.go:141] libmachine: (ha-999305-m02) DBG | exit 0
	I0719 14:39:40.918300   22606 main.go:141] libmachine: (ha-999305-m02) DBG | SSH cmd err, output: <nil>: 
	I0719 14:39:40.918591   22606 main.go:141] libmachine: (ha-999305-m02) KVM machine creation complete!
	I0719 14:39:40.918873   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetConfigRaw
	I0719 14:39:40.919396   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:40.919576   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:40.919704   22606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:39:40.919716   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:39:40.920802   22606 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:39:40.920815   22606 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:39:40.920820   22606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:39:40.920826   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:40.923013   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.923413   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:40.923439   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.923580   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:40.923743   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:40.923927   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:40.924068   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:40.924216   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:40.924432   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:40.924443   22606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:39:41.029539   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:39:41.029567   22606 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:39:41.029575   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.032271   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.032675   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.032707   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.032819   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.033018   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.033183   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.033323   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.033523   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.033741   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.033753   22606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:39:41.143037   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:39:41.143114   22606 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:39:41.143122   22606 main.go:141] libmachine: Provisioning with buildroot...
	I0719 14:39:41.143129   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:41.143364   22606 buildroot.go:166] provisioning hostname "ha-999305-m02"
	I0719 14:39:41.143395   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:41.143602   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.146274   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.146679   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.146706   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.146828   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.147002   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.147152   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.147380   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.147593   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.147762   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.147774   22606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305-m02 && echo "ha-999305-m02" | sudo tee /etc/hostname
	I0719 14:39:41.271088   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305-m02
	
	I0719 14:39:41.271113   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.273393   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.273735   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.273763   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.273881   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.274075   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.274262   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.274414   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.274582   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.274803   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.274825   22606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:39:41.392552   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:39:41.392580   22606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:39:41.392614   22606 buildroot.go:174] setting up certificates
	I0719 14:39:41.392628   22606 provision.go:84] configureAuth start
	I0719 14:39:41.392644   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:41.392952   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:41.395461   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.395808   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.395828   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.396000   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.398076   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.398391   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.398419   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.398642   22606 provision.go:143] copyHostCerts
	I0719 14:39:41.398682   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:39:41.398714   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:39:41.398727   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:39:41.398801   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:39:41.398901   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:39:41.398926   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:39:41.398933   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:39:41.398975   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:39:41.399049   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:39:41.399073   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:39:41.399081   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:39:41.399114   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:39:41.399226   22606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305-m02 san=[127.0.0.1 192.168.39.163 ha-999305-m02 localhost minikube]
	I0719 14:39:41.663891   22606 provision.go:177] copyRemoteCerts
	I0719 14:39:41.663946   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:39:41.663969   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.667045   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.667368   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.667393   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.667560   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.667874   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.668026   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.668146   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:41.752370   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:39:41.752452   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:39:41.777595   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:39:41.777667   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 14:39:41.802078   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:39:41.802148   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 14:39:41.826484   22606 provision.go:87] duration metric: took 433.840369ms to configureAuth
	I0719 14:39:41.826518   22606 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:39:41.826762   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:41.826859   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.829745   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.830121   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.830145   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.830403   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.830600   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.830761   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.830889   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.831041   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.831244   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.831267   22606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:39:42.124350   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:39:42.124380   22606 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:39:42.124390   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetURL
	I0719 14:39:42.125797   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Using libvirt version 6000000
	I0719 14:39:42.128127   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.128492   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.128525   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.128714   22606 main.go:141] libmachine: Docker is up and running!
	I0719 14:39:42.128728   22606 main.go:141] libmachine: Reticulating splines...
	I0719 14:39:42.128735   22606 client.go:171] duration metric: took 23.884223467s to LocalClient.Create
	I0719 14:39:42.128765   22606 start.go:167] duration metric: took 23.884290639s to libmachine.API.Create "ha-999305"
	I0719 14:39:42.128777   22606 start.go:293] postStartSetup for "ha-999305-m02" (driver="kvm2")
	I0719 14:39:42.128793   22606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:39:42.128820   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.129042   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:39:42.129067   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:42.131400   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.131724   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.131748   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.131888   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.132046   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.132211   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.132317   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:42.216466   22606 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:39:42.220784   22606 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:39:42.220805   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:39:42.220876   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:39:42.220973   22606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:39:42.220986   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:39:42.221067   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:39:42.230716   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:39:42.255478   22606 start.go:296] duration metric: took 126.686327ms for postStartSetup
	I0719 14:39:42.255536   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetConfigRaw
	I0719 14:39:42.256145   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:42.258614   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.258911   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.258939   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.259138   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:39:42.259341   22606 start.go:128] duration metric: took 24.032774788s to createHost
	I0719 14:39:42.259366   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:42.261488   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.261759   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.261787   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.261944   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.262103   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.262254   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.262482   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.262665   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:42.262832   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:42.262842   22606 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:39:42.371100   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721399982.329282609
	
	I0719 14:39:42.371123   22606 fix.go:216] guest clock: 1721399982.329282609
	I0719 14:39:42.371130   22606 fix.go:229] Guest: 2024-07-19 14:39:42.329282609 +0000 UTC Remote: 2024-07-19 14:39:42.25935438 +0000 UTC m=+74.526865486 (delta=69.928229ms)
	I0719 14:39:42.371144   22606 fix.go:200] guest clock delta is within tolerance: 69.928229ms
	I0719 14:39:42.371149   22606 start.go:83] releasing machines lock for "ha-999305-m02", held for 24.144708393s
	I0719 14:39:42.371165   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.371446   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:42.373953   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.374337   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.374365   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.376592   22606 out.go:177] * Found network options:
	I0719 14:39:42.377929   22606 out.go:177]   - NO_PROXY=192.168.39.240
	W0719 14:39:42.379182   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:39:42.379207   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.379764   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.379951   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.380040   22606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:39:42.380080   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	W0719 14:39:42.380168   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:39:42.380250   22606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:39:42.380271   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:42.382746   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383077   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.383105   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383124   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383246   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.383403   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.383546   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.383567   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.383594   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383704   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:42.383801   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.383945   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.384056   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.384149   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:42.617964   22606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:39:42.624485   22606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:39:42.624540   22606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:39:42.641218   22606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:39:42.641245   22606 start.go:495] detecting cgroup driver to use...
	I0719 14:39:42.641305   22606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:39:42.657487   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:39:42.671672   22606 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:39:42.671723   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:39:42.685181   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:39:42.698537   22606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:39:42.807279   22606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:39:42.943615   22606 docker.go:233] disabling docker service ...
	I0719 14:39:42.943675   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:39:42.958350   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:39:42.971339   22606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:39:43.105839   22606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:39:43.223091   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:39:43.236680   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:39:43.254975   22606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:39:43.255040   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.266905   22606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:39:43.266971   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.279094   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.289791   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.302548   22606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:39:43.314907   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.325554   22606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.344159   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
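
Taken together, the sed edits above configure CRI-O's drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroupfs cgroup manager, conmon's cgroup, and the unprivileged-port sysctl. A quick way to confirm the resulting values, as a sketch (the rest of the file will vary):

    # show the settings the sed edits above are expected to leave behind
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)
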
	I0719 14:39:43.354516   22606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:39:43.363895   22606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:39:43.363948   22606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:39:43.377079   22606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
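
The sysctl failure above is expected on a fresh guest: the bridge netfilter module is not loaded yet, so the runner loads it and enables IPv4 forwarding before restarting CRI-O. Essentially the same steps by hand:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now report a value instead of "No such file or directory"
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
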
	I0719 14:39:43.386342   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:39:43.492892   22606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:39:43.641929   22606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:39:43.642003   22606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:39:43.646608   22606 start.go:563] Will wait 60s for crictl version
	I0719 14:39:43.646664   22606 ssh_runner.go:195] Run: which crictl
	I0719 14:39:43.650279   22606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:39:43.688012   22606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:39:43.688095   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:39:43.716291   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:39:43.747334   22606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:39:43.748971   22606 out.go:177]   - env NO_PROXY=192.168.39.240
	I0719 14:39:43.750208   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:43.752887   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:43.753298   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:43.753325   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:43.753544   22606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:39:43.758044   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:39:43.771952   22606 mustload.go:65] Loading cluster: ha-999305
	I0719 14:39:43.772130   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:43.772368   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:43.772394   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:43.786872   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0719 14:39:43.787335   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:43.787813   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:43.787831   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:43.788110   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:43.788295   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:43.789897   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:39:43.790172   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:43.790209   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:43.804706   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0719 14:39:43.805093   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:43.805583   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:43.805608   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:43.805947   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:43.806137   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:39:43.806322   22606 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.163
	I0719 14:39:43.806332   22606 certs.go:194] generating shared ca certs ...
	I0719 14:39:43.806344   22606 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:43.806462   22606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:39:43.806495   22606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:39:43.806503   22606 certs.go:256] generating profile certs ...
	I0719 14:39:43.806564   22606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:39:43.806587   22606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b
	I0719 14:39:43.806605   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.163 192.168.39.254]
	I0719 14:39:43.984627   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b ...
	I0719 14:39:43.984656   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b: {Name:mk2a0b1ad7bc80f20dada6c6b7ae3f4c0d7ba80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:43.984811   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b ...
	I0719 14:39:43.984822   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b: {Name:mk23808245a07f43c7c3d40d12ace7cf9ae36ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:43.984890   22606 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:39:43.985019   22606 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
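
The regenerated apiserver certificate has to carry every control-plane address as a SAN, which is why the IP list above includes the service IP, localhost, both node IPs and the kube-vip VIP 192.168.39.254. A sketch for inspecting the SANs of the copied cert (path from the log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
    # expect: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.240, 192.168.39.163, 192.168.39.254
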
	I0719 14:39:43.985137   22606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
	I0719 14:39:43.985151   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:39:43.985163   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:39:43.985176   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:39:43.985188   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:39:43.985200   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:39:43.985213   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:39:43.985225   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:39:43.985236   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 14:39:43.985281   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:39:43.985307   22606 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:39:43.985317   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:39:43.985343   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:39:43.985364   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:39:43.985384   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:39:43.985418   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:39:43.985444   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:43.985457   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:39:43.985470   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:39:43.985500   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:39:43.988477   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:43.988943   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:39:43.988975   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:43.989097   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:39:43.989285   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:39:43.989438   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:39:43.989564   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:39:44.062643   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0719 14:39:44.067970   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 14:39:44.079368   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0719 14:39:44.083652   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 14:39:44.098386   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 14:39:44.102994   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 14:39:44.114292   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0719 14:39:44.118673   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 14:39:44.129653   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0719 14:39:44.133668   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 14:39:44.144876   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0719 14:39:44.150850   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 14:39:44.162619   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:39:44.187791   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:39:44.211834   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:39:44.235247   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:39:44.259337   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 14:39:44.283869   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 14:39:44.308070   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:39:44.331461   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:39:44.354997   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:39:44.379318   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:39:44.403152   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:39:44.427474   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 14:39:44.444233   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 14:39:44.460690   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 14:39:44.477088   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 14:39:44.493773   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 14:39:44.511155   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 14:39:44.528189   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 14:39:44.544925   22606 ssh_runner.go:195] Run: openssl version
	I0719 14:39:44.550673   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:39:44.562009   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:39:44.566717   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:39:44.566785   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:39:44.572524   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:39:44.582943   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:39:44.593097   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:44.597429   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:44.597473   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:44.602955   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:39:44.613681   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:39:44.624201   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:39:44.628353   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:39:44.628396   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:39:44.633697   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
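
The test -L / ln -fs pairs above implement the standard OpenSSL CA lookup scheme: each certificate is linked under /etc/ssl/certs by its subject hash (b5213941, 3ec20f2e and 51391683 in this run). The same idiom for an arbitrary PEM, as a sketch:

    # link a CA certificate into /etc/ssl/certs under its subject hash
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
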
	I0719 14:39:44.643716   22606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:39:44.647535   22606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:39:44.647624   22606 kubeadm.go:934] updating node {m02 192.168.39.163 8443 v1.30.3 crio true true} ...
	I0719 14:39:44.647711   22606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:39:44.647744   22606 kube-vip.go:115] generating kube-vip config ...
	I0719 14:39:44.647780   22606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:39:44.664906   22606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
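
Control-plane load-balancing in kube-vip relies on IPVS, so the modprobe above pulls in ip_vs and its schedulers before the manifest below is written. To check they loaded (same module list as the log):

    sudo modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
    lsmod | grep -E '^ip_vs|^nf_conntrack'
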
	I0719 14:39:44.665032   22606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
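
This manifest is dropped into /etc/kubernetes/manifests (see the kube-vip.yaml copy a few lines below), so kubelet runs kube-vip as a static pod on each control-plane node and the elected leader answers on the VIP 192.168.39.254 via eth0. A quick post-start check, as a sketch (the kubectl context name is assumed to match the profile):

    # on the control-plane node: the VIP should be bound on eth0 by the current kube-vip leader
    ip addr show dev eth0 | grep 192.168.39.254
    # from the host: the static-pod mirrors show up in kube-system
    kubectl --context ha-999305 -n kube-system get pods -o wide | grep kube-vip
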
	I0719 14:39:44.665094   22606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:39:44.674426   22606 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 14:39:44.674477   22606 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 14:39:44.683529   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 14:39:44.683558   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:39:44.683593   22606 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0719 14:39:44.683614   22606 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm
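
Each binary is fetched with a paired .sha256 URL, as shown above, and verified against that checksum before it is cached. A hand-rolled equivalent for one of the binaries (URLs from the log):

    curl -fLO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm
    curl -fLO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
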
	I0719 14:39:44.683625   22606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:39:44.687965   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 14:39:44.687997   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 14:40:18.526029   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:40:18.526122   22606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:40:18.531072   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 14:40:18.531099   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 14:40:55.985114   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:40:56.001664   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:40:56.001784   22606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:40:56.006456   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 14:40:56.006485   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 14:40:56.398539   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 14:40:56.408289   22606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 14:40:56.424633   22606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:40:56.440389   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 14:40:56.456320   22606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:40:56.460249   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:40:56.471499   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:40:56.585388   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:40:56.601533   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:40:56.601886   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:40:56.601928   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:40:56.616473   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0719 14:40:56.616930   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:40:56.617408   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:40:56.617424   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:40:56.617720   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:40:56.617898   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:40:56.618071   22606 start.go:317] joinCluster: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:40:56.618164   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 14:40:56.618185   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:40:56.621208   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:40:56.621621   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:40:56.621653   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:40:56.621819   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:40:56.622005   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:40:56.622158   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:40:56.622332   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:40:56.782875   22606 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:40:56.782916   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t5x36i.g04hbzpy1n0k6w3r --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m02 --control-plane --apiserver-advertise-address=192.168.39.163 --apiserver-bind-port=8443"
	I0719 14:41:19.066465   22606 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t5x36i.g04hbzpy1n0k6w3r --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m02 --control-plane --apiserver-advertise-address=192.168.39.163 --apiserver-bind-port=8443": (22.283523172s)
	I0719 14:41:19.066505   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 14:41:19.639495   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-999305-m02 minikube.k8s.io/updated_at=2024_07_19T14_41_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=ha-999305 minikube.k8s.io/primary=false
	I0719 14:41:19.784944   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-999305-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 14:41:19.897959   22606 start.go:319] duration metric: took 23.279884364s to joinCluster
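
At this point the second control-plane node has joined via the kubeadm join command above, been labeled, and had the control-plane NoSchedule taint removed so it can also run workloads. A sketch for confirming membership from the host (context name assumed to match the profile):

    kubectl --context ha-999305 get nodes -o wide
    kubectl --context ha-999305 -n kube-system get pods -o wide | grep -E 'etcd|kube-apiserver'
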
	I0719 14:41:19.898032   22606 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:41:19.898338   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:41:19.899533   22606 out.go:177] * Verifying Kubernetes components...
	I0719 14:41:19.900743   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:41:20.187486   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:41:20.233898   22606 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:41:20.234135   22606 kapi.go:59] client config for ha-999305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key", CAFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 14:41:20.234217   22606 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0719 14:41:20.234484   22606 node_ready.go:35] waiting up to 6m0s for node "ha-999305-m02" to be "Ready" ...
	I0719 14:41:20.234586   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:20.234597   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:20.234608   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:20.234612   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:20.245032   22606 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
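
The repeated GETs that follow are minikube polling the node object until its Ready condition turns true; the HTTP 200 responses only mean the object was fetched, not that the node is Ready. The loop is roughly equivalent to:

    # context name assumed to match the profile
    kubectl --context ha-999305 wait --for=condition=Ready node/ha-999305-m02 --timeout=6m
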
	I0719 14:41:20.735045   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:20.735065   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:20.735073   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:20.735077   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:20.739760   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:21.235443   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:21.235475   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:21.235482   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:21.235489   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:21.238391   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:21.735486   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:21.735506   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:21.735514   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:21.735519   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:21.738581   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:22.235601   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:22.235623   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:22.235631   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:22.235634   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:22.239096   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:22.239796   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:22.735141   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:22.735167   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:22.735177   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:22.735182   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:22.738655   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:23.235387   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:23.235409   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:23.235421   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:23.235425   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:23.239298   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:23.734970   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:23.734992   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:23.735002   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:23.735007   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:23.738576   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:24.235569   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:24.235594   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:24.235606   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:24.235611   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:24.239367   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:24.240102   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:24.735498   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:24.735517   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:24.735525   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:24.735529   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:24.739607   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:25.235456   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:25.235478   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:25.235486   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:25.235491   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:25.238496   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:25.734676   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:25.734702   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:25.734714   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:25.734720   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:25.737811   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:26.234786   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:26.234812   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:26.234824   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:26.234829   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:26.238309   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:26.734665   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:26.734690   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:26.734699   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:26.734707   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:26.744183   22606 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 14:41:26.745230   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:27.235583   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:27.235604   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:27.235611   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:27.235614   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:27.238654   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:27.734757   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:27.734777   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:27.734784   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:27.734788   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:27.738092   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:28.235017   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:28.235046   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:28.235057   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:28.235065   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:28.238698   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:28.735413   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:28.735437   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:28.735448   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:28.735455   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:28.739308   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:29.235460   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:29.235482   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:29.235489   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:29.235493   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:29.238787   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:29.239611   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:29.734920   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:29.734940   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:29.734947   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:29.734951   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:29.738339   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:30.235322   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:30.235344   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:30.235353   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:30.235357   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:30.239241   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:30.735462   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:30.735486   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:30.735496   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:30.735500   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:30.738712   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:31.234839   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:31.234857   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:31.234865   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:31.234868   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:31.237706   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:31.735364   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:31.735385   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:31.735395   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:31.735400   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:31.738337   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:31.738925   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:32.235153   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:32.235176   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:32.235186   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:32.235192   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:32.238054   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:32.735287   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:32.735312   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:32.735322   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:32.735327   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:32.739030   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:33.235500   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:33.235528   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:33.235540   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:33.235547   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:33.239031   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:33.735435   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:33.735458   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:33.735469   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:33.735475   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:33.738128   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:34.234952   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:34.234975   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:34.234983   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:34.234988   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:34.238593   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:34.239210   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:34.735472   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:34.735490   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:34.735499   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:34.735502   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:34.738766   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:35.235517   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:35.235543   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:35.235556   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:35.235561   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:35.238716   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:35.734746   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:35.734765   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:35.734773   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:35.734777   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:35.738036   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:36.235477   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:36.235502   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:36.235512   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:36.235517   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:36.238926   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:36.239395   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:36.735200   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:36.735222   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:36.735232   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:36.735238   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:36.739765   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:37.234858   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:37.234881   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:37.234892   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:37.234898   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:37.238254   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:37.735460   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:37.735482   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:37.735490   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:37.735494   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:37.739155   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.234873   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:38.234900   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.234913   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.234917   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.238547   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.239036   22606 node_ready.go:49] node "ha-999305-m02" has status "Ready":"True"
	I0719 14:41:38.239054   22606 node_ready.go:38] duration metric: took 18.004552949s for node "ha-999305-m02" to be "Ready" ...
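The ~500 ms polling loop above is how the test waits for the new control-plane node to report Ready: it keeps GETting the node object until the Ready condition flips to "True" (about 18 s in this run). A minimal client-go sketch of the same check follows; the kubeconfig path, timeout, and function names are illustrative, not minikube's own node_ready.go.

    // nodeready_sketch.go — poll a node until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        tick := time.NewTicker(500 * time.Millisecond) // same cadence as the log above
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil // node reports Ready:"True"
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        fmt.Println(waitNodeReady(ctx, cs, "ha-999305-m02"))
    }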
	I0719 14:41:38.239062   22606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:41:38.239118   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:38.239126   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.239132   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.239138   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.244137   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:38.250129   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.250192   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9sxgr
	I0719 14:41:38.250200   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.250207   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.250210   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.253366   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.254466   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.254487   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.254494   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.254498   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.257356   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.258028   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.258049   22606 pod_ready.go:81] duration metric: took 7.899929ms for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.258060   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.258118   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gtwxd
	I0719 14:41:38.258129   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.258138   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.258147   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.261231   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.262263   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.262278   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.262287   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.262291   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.265036   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.265950   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.265968   22606 pod_ready.go:81] duration metric: took 7.899503ms for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.265977   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.266020   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305
	I0719 14:41:38.266027   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.266033   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.266038   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.268403   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.268981   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.268997   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.269004   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.269007   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.271168   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.271583   22606 pod_ready.go:92] pod "etcd-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.271596   22606 pod_ready.go:81] duration metric: took 5.613301ms for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.271604   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.271660   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m02
	I0719 14:41:38.271670   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.271677   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.271681   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.274267   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.274894   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:38.274909   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.274919   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.274926   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.277928   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.278559   22606 pod_ready.go:92] pod "etcd-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.278575   22606 pod_ready.go:81] duration metric: took 6.965386ms for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.278591   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.434872   22606 request.go:629] Waited for 156.22314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:41:38.434943   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:41:38.434950   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.434960   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.434967   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.438021   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
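The "Waited for … due to client-side throttling, not priority and fairness" lines are emitted by client-go's own token-bucket limiter, not by the API server: once the burst of GETs exceeds the client's default QPS/Burst (5/10), each extra request is delayed locally before being sent. A small sketch of where those knobs live; the values below are arbitrary examples, not minikube's settings.

    // throttle_sketch.go — raise the client-side rate limit on a rest.Config.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; a burst of GETs beyond that is delayed
        // client-side, which is what request.go:629 reports in the log above.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }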
	I0719 14:41:38.635381   22606 request.go:629] Waited for 196.400511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.635437   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.635444   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.635454   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.635462   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.638941   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.639629   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.639647   22606 pod_ready.go:81] duration metric: took 361.04492ms for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.639656   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.834874   22606 request.go:629] Waited for 195.138261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:41:38.834965   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:41:38.834977   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.834988   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.834997   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.838012   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:39.035005   22606 request.go:629] Waited for 196.286103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.035081   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.035092   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.035108   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.035116   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.039251   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:39.039819   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:39.039837   22606 pod_ready.go:81] duration metric: took 400.173919ms for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.039851   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.235914   22606 request.go:629] Waited for 195.992688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:41:39.236002   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:41:39.236010   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.236021   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.236029   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.239055   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:39.434982   22606 request.go:629] Waited for 195.302459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:39.435071   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:39.435081   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.435094   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.435103   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.438520   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:39.439039   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:39.439062   22606 pod_ready.go:81] duration metric: took 399.203191ms for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.439075   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.635186   22606 request.go:629] Waited for 196.027799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:41:39.635251   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:41:39.635258   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.635269   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.635273   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.638441   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:39.835647   22606 request.go:629] Waited for 196.392371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.835732   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.835741   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.835748   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.835752   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.838529   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:39.839077   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:39.839097   22606 pod_ready.go:81] duration metric: took 400.012031ms for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.839109   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.035136   22606 request.go:629] Waited for 195.963436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:41:40.035199   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:41:40.035205   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.035213   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.035217   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.038436   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:40.235700   22606 request.go:629] Waited for 196.338225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:40.235748   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:40.235753   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.235760   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.235766   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.240036   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:40.240449   22606 pod_ready.go:92] pod "kube-proxy-766sx" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:40.240466   22606 pod_ready.go:81] duration metric: took 401.349631ms for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.240474   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.435698   22606 request.go:629] Waited for 195.163815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:41:40.435796   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:41:40.435807   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.435818   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.435826   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.439801   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:40.634947   22606 request.go:629] Waited for 194.275452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:40.635036   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:40.635047   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.635058   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.635068   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.638020   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:40.638718   22606 pod_ready.go:92] pod "kube-proxy-s2wb7" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:40.638741   22606 pod_ready.go:81] duration metric: took 398.258211ms for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.638753   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.835797   22606 request.go:629] Waited for 196.967578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:41:40.835861   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:41:40.835868   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.835878   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.835898   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.839212   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:41.035378   22606 request.go:629] Waited for 195.341022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:41.035430   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:41.035437   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.035447   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.035458   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.038664   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:41.039195   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:41.039211   22606 pod_ready.go:81] duration metric: took 400.451796ms for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:41.039219   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:41.235499   22606 request.go:629] Waited for 196.192704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:41:41.235566   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:41:41.235576   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.235588   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.235595   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.238457   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:41.435372   22606 request.go:629] Waited for 196.342868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:41.435439   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:41.435446   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.435453   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.435458   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.439187   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:41.439914   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:41.439928   22606 pod_ready.go:81] duration metric: took 400.703094ms for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:41.439938   22606 pod_ready.go:38] duration metric: took 3.200865668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
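After the node is Ready, the same style of check runs per system-critical pod: list kube-system pods by label and wait until each one reports the PodReady condition, pairing every pod GET with a GET of its node. A rough client-go equivalent of that wait; the label selectors are copied from the log, everything else is illustrative.

    // podready_sketch.go — wait for labelled kube-system pods to be Ready.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
        for _, sel := range selectors {
            for {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
                if err != nil {
                    panic(err)
                }
                ready := len(pods.Items) > 0
                for i := range pods.Items {
                    ready = ready && podReady(&pods.Items[i])
                }
                if ready {
                    fmt.Println(sel, "Ready")
                    break
                }
                time.Sleep(500 * time.Millisecond)
            }
        }
    }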
	I0719 14:41:41.439951   22606 api_server.go:52] waiting for apiserver process to appear ...
	I0719 14:41:41.439996   22606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:41:41.456138   22606 api_server.go:72] duration metric: took 21.558072267s to wait for apiserver process to appear ...
	I0719 14:41:41.456162   22606 api_server.go:88] waiting for apiserver healthz status ...
	I0719 14:41:41.456180   22606 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0719 14:41:41.460620   22606 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0719 14:41:41.460681   22606 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0719 14:41:41.460691   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.460702   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.460707   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.461594   22606 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 14:41:41.461708   22606 api_server.go:141] control plane version: v1.30.3
	I0719 14:41:41.461726   22606 api_server.go:131] duration metric: took 5.557821ms to wait for apiserver health ...
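The health check is two plain GETs against the API server: /healthz must return 200 with the body "ok", and /version yields the control-plane version (v1.30.3 in this run). A sketch of the same probes through a clientset's REST client, assuming a standard kubeconfig rather than minikube's api_server.go helpers.

    // healthz_sketch.go — probe /healthz and /version on the API server.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz should return the literal body "ok" with HTTP 200, as in the log.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version reports the control-plane version.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }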
	I0719 14:41:41.461734   22606 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 14:41:41.635397   22606 request.go:629] Waited for 173.600025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:41.635453   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:41.635468   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.635475   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.635480   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.641142   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:41:41.646068   22606 system_pods.go:59] 17 kube-system pods found
	I0719 14:41:41.646092   22606 system_pods.go:61] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:41:41.646097   22606 system_pods.go:61] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:41:41.646101   22606 system_pods.go:61] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:41:41.646105   22606 system_pods.go:61] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:41:41.646108   22606 system_pods.go:61] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:41:41.646111   22606 system_pods.go:61] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:41:41.646115   22606 system_pods.go:61] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:41:41.646118   22606 system_pods.go:61] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:41:41.646121   22606 system_pods.go:61] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:41:41.646124   22606 system_pods.go:61] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:41:41.646127   22606 system_pods.go:61] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:41:41.646133   22606 system_pods.go:61] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:41:41.646138   22606 system_pods.go:61] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:41:41.646143   22606 system_pods.go:61] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:41:41.646145   22606 system_pods.go:61] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:41:41.646148   22606 system_pods.go:61] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:41:41.646151   22606 system_pods.go:61] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:41:41.646157   22606 system_pods.go:74] duration metric: took 184.414006ms to wait for pod list to return data ...
	I0719 14:41:41.646165   22606 default_sa.go:34] waiting for default service account to be created ...
	I0719 14:41:41.835646   22606 request.go:629] Waited for 189.422487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:41:41.835701   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:41:41.835707   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.835712   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.835716   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.838704   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:41.838951   22606 default_sa.go:45] found service account: "default"
	I0719 14:41:41.838973   22606 default_sa.go:55] duration metric: took 192.800827ms for default service account to be created ...
	I0719 14:41:41.838984   22606 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 14:41:42.035198   22606 request.go:629] Waited for 196.150627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:42.035275   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:42.035284   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:42.035292   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:42.035297   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:42.040481   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:41:42.045064   22606 system_pods.go:86] 17 kube-system pods found
	I0719 14:41:42.045086   22606 system_pods.go:89] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:41:42.045091   22606 system_pods.go:89] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:41:42.045095   22606 system_pods.go:89] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:41:42.045099   22606 system_pods.go:89] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:41:42.045103   22606 system_pods.go:89] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:41:42.045108   22606 system_pods.go:89] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:41:42.045114   22606 system_pods.go:89] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:41:42.045119   22606 system_pods.go:89] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:41:42.045129   22606 system_pods.go:89] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:41:42.045135   22606 system_pods.go:89] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:41:42.045144   22606 system_pods.go:89] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:41:42.045150   22606 system_pods.go:89] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:41:42.045156   22606 system_pods.go:89] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:41:42.045163   22606 system_pods.go:89] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:41:42.045167   22606 system_pods.go:89] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:41:42.045171   22606 system_pods.go:89] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:41:42.045176   22606 system_pods.go:89] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:41:42.045183   22606 system_pods.go:126] duration metric: took 206.193234ms to wait for k8s-apps to be running ...
	I0719 14:41:42.045192   22606 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 14:41:42.045247   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:41:42.061913   22606 system_svc.go:56] duration metric: took 16.713203ms WaitForService to wait for kubelet
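The kubelet check is not an API call at all: a `systemctl is-active` command is run over SSH on the guest and only its exit status matters. A stand-alone sketch of that probe; the host, user, and key path below are placeholders, not values from this run.

    // kubelet_check_sketch.go — ask a node over ssh whether kubelet is active.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubeletActive(host, keyPath string) bool {
        cmd := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no",
            "docker@"+host, "sudo", "systemctl", "is-active", "--quiet", "kubelet")
        // is-active exits 0 when the unit is active, non-zero otherwise.
        return cmd.Run() == nil
    }

    func main() {
        fmt.Println(kubeletActive("192.168.39.163", "/path/to/machines/ha-999305-m02/id_rsa"))
    }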
	I0719 14:41:42.061937   22606 kubeadm.go:582] duration metric: took 22.163877158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:41:42.061956   22606 node_conditions.go:102] verifying NodePressure condition ...
	I0719 14:41:42.235374   22606 request.go:629] Waited for 173.359298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0719 14:41:42.235445   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0719 14:41:42.235452   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:42.235462   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:42.235468   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:42.239088   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:42.240094   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:41:42.240129   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:41:42.240143   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:41:42.240148   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:41:42.240154   22606 node_conditions.go:105] duration metric: took 178.193733ms to run NodePressure ...
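The NodePressure step reads each node's reported capacity (the ephemeral-storage and CPU figures above) and would flag any pressure conditions. Per node that amounts to roughly the following; this is an illustrative sketch, not minikube's node_conditions.go.

    // nodeconditions_sketch.go — print node capacity and any pressure conditions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        fmt.Printf("%s: pressure condition %s is True\n", n.Name, c.Type)
                    }
                }
            }
        }
    }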
	I0719 14:41:42.240175   22606 start.go:241] waiting for startup goroutines ...
	I0719 14:41:42.240205   22606 start.go:255] writing updated cluster config ...
	I0719 14:41:42.242285   22606 out.go:177] 
	I0719 14:41:42.243680   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:41:42.243768   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:41:42.245236   22606 out.go:177] * Starting "ha-999305-m03" control-plane node in "ha-999305" cluster
	I0719 14:41:42.246373   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:41:42.246397   22606 cache.go:56] Caching tarball of preloaded images
	I0719 14:41:42.246500   22606 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:41:42.246510   22606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:41:42.246584   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:41:42.246726   22606 start.go:360] acquireMachinesLock for ha-999305-m03: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:41:42.246762   22606 start.go:364] duration metric: took 20.176µs to acquireMachinesLock for "ha-999305-m03"
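Provisioning of m03 starts by taking the machines lock with the spec logged above: retry every 500 ms, give up after 13 minutes. The real implementation uses a named OS-level mutex; a lock file approximates the same behaviour in this sketch.

    // machineslock_sketch.go — acquire a named lock with a retry delay and timeout.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    func acquireLock(name string, delay, timeout time.Duration) (release func(), err error) {
        path := filepath.Join(os.TempDir(), name+".lock")
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s after %s", name, timeout)
            }
            time.Sleep(delay) // Delay:500ms, Timeout:13m0s in the logged spec
        }
    }

    func main() {
        release, err := acquireLock("ha-999305-m03", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to provision the machine")
    }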
	I0719 14:41:42.246777   22606 start.go:93] Provisioning new machine with config: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:41:42.246868   22606 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0719 14:41:42.248167   22606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 14:41:42.248230   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:41:42.248257   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:41:42.265009   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37731
	I0719 14:41:42.265465   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:41:42.266011   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:41:42.266037   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:41:42.266401   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:41:42.266599   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:41:42.266789   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:41:42.266971   22606 start.go:159] libmachine.API.Create for "ha-999305" (driver="kvm2")
	I0719 14:41:42.267000   22606 client.go:168] LocalClient.Create starting
	I0719 14:41:42.267036   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:41:42.267079   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:41:42.267101   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:41:42.267192   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:41:42.267222   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:41:42.267241   22606 main.go:141] libmachine: Parsing certificate...
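The "Reading certificate data / Decoding PEM data / Parsing certificate" steps are ordinary x509 handling of the CA and client certs under .minikube/certs. In plain Go that amounts to the following; the path is copied from the log, the rest is a generic sketch.

    // certs_sketch.go — read a PEM certificate, strip the armor, parse the DER payload.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data) // strip the PEM armor
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes) // DER -> *x509.Certificate
        if err != nil {
            panic(err)
        }
        fmt.Println(cert.Subject, "valid until", cert.NotAfter)
    }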
	I0719 14:41:42.267269   22606 main.go:141] libmachine: Running pre-create checks...
	I0719 14:41:42.267280   22606 main.go:141] libmachine: (ha-999305-m03) Calling .PreCreateCheck
	I0719 14:41:42.267533   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetConfigRaw
	I0719 14:41:42.267945   22606 main.go:141] libmachine: Creating machine...
	I0719 14:41:42.267958   22606 main.go:141] libmachine: (ha-999305-m03) Calling .Create
	I0719 14:41:42.268123   22606 main.go:141] libmachine: (ha-999305-m03) Creating KVM machine...
	I0719 14:41:42.269508   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found existing default KVM network
	I0719 14:41:42.269686   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found existing private KVM network mk-ha-999305
	I0719 14:41:42.269934   22606 main.go:141] libmachine: (ha-999305-m03) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03 ...
	I0719 14:41:42.269959   22606 main.go:141] libmachine: (ha-999305-m03) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:41:42.270050   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.269929   23632 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:41:42.270149   22606 main.go:141] libmachine: (ha-999305-m03) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:41:42.492863   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.492735   23632 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa...
	I0719 14:41:42.536529   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.536434   23632 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/ha-999305-m03.rawdisk...
	I0719 14:41:42.536555   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Writing magic tar header
	I0719 14:41:42.536567   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Writing SSH key tar header
	I0719 14:41:42.536677   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.536582   23632 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03 ...
	I0719 14:41:42.536705   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03 (perms=drwx------)
	I0719 14:41:42.536712   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03
	I0719 14:41:42.536743   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:41:42.536767   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:41:42.536779   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:41:42.536788   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:41:42.536798   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:41:42.536813   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:41:42.536823   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:41:42.536835   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home
	I0719 14:41:42.536847   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Skipping /home - not owner
	I0719 14:41:42.536860   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:41:42.536873   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:41:42.536884   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:41:42.536892   22606 main.go:141] libmachine: (ha-999305-m03) Creating domain...
	I0719 14:41:42.537776   22606 main.go:141] libmachine: (ha-999305-m03) define libvirt domain using xml: 
	I0719 14:41:42.537798   22606 main.go:141] libmachine: (ha-999305-m03) <domain type='kvm'>
	I0719 14:41:42.537810   22606 main.go:141] libmachine: (ha-999305-m03)   <name>ha-999305-m03</name>
	I0719 14:41:42.537822   22606 main.go:141] libmachine: (ha-999305-m03)   <memory unit='MiB'>2200</memory>
	I0719 14:41:42.537845   22606 main.go:141] libmachine: (ha-999305-m03)   <vcpu>2</vcpu>
	I0719 14:41:42.537866   22606 main.go:141] libmachine: (ha-999305-m03)   <features>
	I0719 14:41:42.537876   22606 main.go:141] libmachine: (ha-999305-m03)     <acpi/>
	I0719 14:41:42.537886   22606 main.go:141] libmachine: (ha-999305-m03)     <apic/>
	I0719 14:41:42.537896   22606 main.go:141] libmachine: (ha-999305-m03)     <pae/>
	I0719 14:41:42.537901   22606 main.go:141] libmachine: (ha-999305-m03)     
	I0719 14:41:42.537909   22606 main.go:141] libmachine: (ha-999305-m03)   </features>
	I0719 14:41:42.537914   22606 main.go:141] libmachine: (ha-999305-m03)   <cpu mode='host-passthrough'>
	I0719 14:41:42.537920   22606 main.go:141] libmachine: (ha-999305-m03)   
	I0719 14:41:42.537925   22606 main.go:141] libmachine: (ha-999305-m03)   </cpu>
	I0719 14:41:42.537931   22606 main.go:141] libmachine: (ha-999305-m03)   <os>
	I0719 14:41:42.537945   22606 main.go:141] libmachine: (ha-999305-m03)     <type>hvm</type>
	I0719 14:41:42.537958   22606 main.go:141] libmachine: (ha-999305-m03)     <boot dev='cdrom'/>
	I0719 14:41:42.537968   22606 main.go:141] libmachine: (ha-999305-m03)     <boot dev='hd'/>
	I0719 14:41:42.537981   22606 main.go:141] libmachine: (ha-999305-m03)     <bootmenu enable='no'/>
	I0719 14:41:42.537990   22606 main.go:141] libmachine: (ha-999305-m03)   </os>
	I0719 14:41:42.537998   22606 main.go:141] libmachine: (ha-999305-m03)   <devices>
	I0719 14:41:42.538005   22606 main.go:141] libmachine: (ha-999305-m03)     <disk type='file' device='cdrom'>
	I0719 14:41:42.538014   22606 main.go:141] libmachine: (ha-999305-m03)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/boot2docker.iso'/>
	I0719 14:41:42.538025   22606 main.go:141] libmachine: (ha-999305-m03)       <target dev='hdc' bus='scsi'/>
	I0719 14:41:42.538035   22606 main.go:141] libmachine: (ha-999305-m03)       <readonly/>
	I0719 14:41:42.538045   22606 main.go:141] libmachine: (ha-999305-m03)     </disk>
	I0719 14:41:42.538058   22606 main.go:141] libmachine: (ha-999305-m03)     <disk type='file' device='disk'>
	I0719 14:41:42.538069   22606 main.go:141] libmachine: (ha-999305-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:41:42.538124   22606 main.go:141] libmachine: (ha-999305-m03)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/ha-999305-m03.rawdisk'/>
	I0719 14:41:42.538150   22606 main.go:141] libmachine: (ha-999305-m03)       <target dev='hda' bus='virtio'/>
	I0719 14:41:42.538164   22606 main.go:141] libmachine: (ha-999305-m03)     </disk>
	I0719 14:41:42.538176   22606 main.go:141] libmachine: (ha-999305-m03)     <interface type='network'>
	I0719 14:41:42.538190   22606 main.go:141] libmachine: (ha-999305-m03)       <source network='mk-ha-999305'/>
	I0719 14:41:42.538200   22606 main.go:141] libmachine: (ha-999305-m03)       <model type='virtio'/>
	I0719 14:41:42.538210   22606 main.go:141] libmachine: (ha-999305-m03)     </interface>
	I0719 14:41:42.538221   22606 main.go:141] libmachine: (ha-999305-m03)     <interface type='network'>
	I0719 14:41:42.538252   22606 main.go:141] libmachine: (ha-999305-m03)       <source network='default'/>
	I0719 14:41:42.538268   22606 main.go:141] libmachine: (ha-999305-m03)       <model type='virtio'/>
	I0719 14:41:42.538277   22606 main.go:141] libmachine: (ha-999305-m03)     </interface>
	I0719 14:41:42.538285   22606 main.go:141] libmachine: (ha-999305-m03)     <serial type='pty'>
	I0719 14:41:42.538296   22606 main.go:141] libmachine: (ha-999305-m03)       <target port='0'/>
	I0719 14:41:42.538305   22606 main.go:141] libmachine: (ha-999305-m03)     </serial>
	I0719 14:41:42.538311   22606 main.go:141] libmachine: (ha-999305-m03)     <console type='pty'>
	I0719 14:41:42.538318   22606 main.go:141] libmachine: (ha-999305-m03)       <target type='serial' port='0'/>
	I0719 14:41:42.538323   22606 main.go:141] libmachine: (ha-999305-m03)     </console>
	I0719 14:41:42.538328   22606 main.go:141] libmachine: (ha-999305-m03)     <rng model='virtio'>
	I0719 14:41:42.538334   22606 main.go:141] libmachine: (ha-999305-m03)       <backend model='random'>/dev/random</backend>
	I0719 14:41:42.538342   22606 main.go:141] libmachine: (ha-999305-m03)     </rng>
	I0719 14:41:42.538347   22606 main.go:141] libmachine: (ha-999305-m03)     
	I0719 14:41:42.538351   22606 main.go:141] libmachine: (ha-999305-m03)     
	I0719 14:41:42.538357   22606 main.go:141] libmachine: (ha-999305-m03)   </devices>
	I0719 14:41:42.538362   22606 main.go:141] libmachine: (ha-999305-m03) </domain>
	I0719 14:41:42.538369   22606 main.go:141] libmachine: (ha-999305-m03) 
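The XML printed above is then handed to libvirt: define the domain, start it, and wait for it to boot. With the libvirt Go bindings (assumed here; the kvm2 driver wraps this behind the machine API) the define-and-create step looks roughly like this sketch.

    // domain_sketch.go — define and start a libvirt domain from an XML document.
    package main

    import (
        "fmt"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        xml, err := os.ReadFile("ha-999305-m03.xml") // the <domain> document from the log
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the defined domain ("Creating domain...")
            panic(err)
        }
        fmt.Println("domain started; next step is waiting for it to pick up an IP")
    }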
	I0719 14:41:42.544894   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:ed:64:e5 in network default
	I0719 14:41:42.545450   22606 main.go:141] libmachine: (ha-999305-m03) Ensuring networks are active...
	I0719 14:41:42.545465   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:42.546126   22606 main.go:141] libmachine: (ha-999305-m03) Ensuring network default is active
	I0719 14:41:42.546439   22606 main.go:141] libmachine: (ha-999305-m03) Ensuring network mk-ha-999305 is active
	I0719 14:41:42.546752   22606 main.go:141] libmachine: (ha-999305-m03) Getting domain xml...
	I0719 14:41:42.547397   22606 main.go:141] libmachine: (ha-999305-m03) Creating domain...
	I0719 14:41:43.766414   22606 main.go:141] libmachine: (ha-999305-m03) Waiting to get IP...
	I0719 14:41:43.767154   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:43.767504   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:43.767542   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:43.767479   23632 retry.go:31] will retry after 296.827647ms: waiting for machine to come up
	I0719 14:41:44.065979   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:44.066451   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:44.066481   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:44.066421   23632 retry.go:31] will retry after 340.383239ms: waiting for machine to come up
	I0719 14:41:44.407886   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:44.408379   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:44.408402   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:44.408310   23632 retry.go:31] will retry after 352.464502ms: waiting for machine to come up
	I0719 14:41:44.762806   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:44.763245   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:44.763266   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:44.763205   23632 retry.go:31] will retry after 583.331034ms: waiting for machine to come up
	I0719 14:41:45.348677   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:45.349016   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:45.349043   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:45.348905   23632 retry.go:31] will retry after 613.461603ms: waiting for machine to come up
	I0719 14:41:45.963853   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:45.964231   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:45.964262   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:45.964195   23632 retry.go:31] will retry after 690.125797ms: waiting for machine to come up
	I0719 14:41:46.656206   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:46.656663   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:46.656696   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:46.656597   23632 retry.go:31] will retry after 839.358911ms: waiting for machine to come up
	I0719 14:41:47.497863   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:47.498300   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:47.498329   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:47.498254   23632 retry.go:31] will retry after 1.407821443s: waiting for machine to come up
	I0719 14:41:48.907371   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:48.907819   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:48.907850   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:48.907772   23632 retry.go:31] will retry after 1.178162674s: waiting for machine to come up
	I0719 14:41:50.087798   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:50.088232   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:50.088258   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:50.088188   23632 retry.go:31] will retry after 1.754275136s: waiting for machine to come up
	I0719 14:41:51.844373   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:51.844818   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:51.844846   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:51.844772   23632 retry.go:31] will retry after 2.508819786s: waiting for machine to come up
	I0719 14:41:54.355224   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:54.355670   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:54.355719   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:54.355604   23632 retry.go:31] will retry after 2.253850604s: waiting for machine to come up
	I0719 14:41:56.611405   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:56.611834   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:56.611864   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:56.611788   23632 retry.go:31] will retry after 2.874253079s: waiting for machine to come up
	I0719 14:41:59.487290   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:59.487672   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:59.487696   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:59.487621   23632 retry.go:31] will retry after 5.378647907s: waiting for machine to come up
	I0719 14:42:04.870101   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.870403   22606 main.go:141] libmachine: (ha-999305-m03) Found IP for machine: 192.168.39.250
	I0719 14:42:04.870438   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.870448   22606 main.go:141] libmachine: (ha-999305-m03) Reserving static IP address...
	I0719 14:42:04.870941   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find host DHCP lease matching {name: "ha-999305-m03", mac: "52:54:00:c6:46:fe", ip: "192.168.39.250"} in network mk-ha-999305
	I0719 14:42:04.945778   22606 main.go:141] libmachine: (ha-999305-m03) Reserved static IP address: 192.168.39.250
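The "will retry after ..." lines above show the driver polling for a DHCP lease with growing, jittered waits until the domain reports an IP. A minimal illustrative sketch of that retry shape (not minikube's retry.go API; the 300ms starting wait and 2x growth are assumptions for the example):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor keeps calling check until it succeeds or the deadline passes,
// sleeping a randomized, roughly doubling interval between attempts.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		d := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, d)
		time.Sleep(d)
		wait *= 2
	}
}

func main() {
	calls := 0
	_ = waitFor(func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
}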
	I0719 14:42:04.945812   22606 main.go:141] libmachine: (ha-999305-m03) Waiting for SSH to be available...
	I0719 14:42:04.945823   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Getting to WaitForSSH function...
	I0719 14:42:04.948263   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.948701   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:04.948731   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.948985   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Using SSH client type: external
	I0719 14:42:04.949012   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa (-rw-------)
	I0719 14:42:04.949039   22606 main.go:141] libmachine: (ha-999305-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:42:04.949052   22606 main.go:141] libmachine: (ha-999305-m03) DBG | About to run SSH command:
	I0719 14:42:04.949067   22606 main.go:141] libmachine: (ha-999305-m03) DBG | exit 0
	I0719 14:42:05.074442   22606 main.go:141] libmachine: (ha-999305-m03) DBG | SSH cmd err, output: <nil>: 
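The SSH availability probe above shells out to the system ssh binary with non-interactive options and runs "exit 0". A hedged sketch of that external probe, reusing the host, key path and options from the log lines (the helper itself is illustrative, not minikube's WaitForSSH):

package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero returns nil once a non-interactive "exit 0" over SSH succeeds.
func sshExitZero(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
		"-i", keyPath, "-p", "22", "docker@" + host, "exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa"
	if err := sshExitZero("192.168.39.250", key); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}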
	I0719 14:42:05.074712   22606 main.go:141] libmachine: (ha-999305-m03) KVM machine creation complete!
	I0719 14:42:05.075089   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetConfigRaw
	I0719 14:42:05.075750   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:05.075978   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:05.076147   22606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:42:05.076165   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:42:05.077511   22606 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:42:05.077527   22606 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:42:05.077539   22606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:42:05.077547   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.080484   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.080789   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.080804   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.081001   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.081210   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.081363   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.081515   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.081697   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.082082   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.082100   22606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:42:05.189746   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:42:05.189772   22606 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:42:05.189782   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.192677   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.193131   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.193164   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.193320   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.193514   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.193723   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.193878   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.194084   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.194320   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.194334   22606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:42:05.303539   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:42:05.303686   22606 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:42:05.303701   22606 main.go:141] libmachine: Provisioning with buildroot...
	I0719 14:42:05.303713   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:42:05.304029   22606 buildroot.go:166] provisioning hostname "ha-999305-m03"
	I0719 14:42:05.304061   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:42:05.304285   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.306863   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.307333   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.307356   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.307579   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.307778   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.307946   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.308116   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.308289   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.308441   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.308452   22606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305-m03 && echo "ha-999305-m03" | sudo tee /etc/hostname
	I0719 14:42:05.429934   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305-m03
	
	I0719 14:42:05.429966   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.432693   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.433046   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.433072   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.433200   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.433399   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.433605   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.433743   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.433892   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.434059   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.434075   22606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:42:05.552429   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
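Hostname provisioning above runs two shell commands over SSH: set /etc/hostname, then keep /etc/hosts resolving the new name via the 127.0.1.1 convention. A sketch that only builds those command strings (the SSH runner that would execute them is out of scope here):

package main

import "fmt"

// hostnameCommands mirrors the shell the log shows for a given node name.
func hostnameCommands(name string) []string {
	return []string{
		// set the transient and persistent hostname
		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
		// make sure /etc/hosts resolves the new name
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
	}
}

func main() {
	for _, cmd := range hostnameCommands("ha-999305-m03") {
		fmt.Println(cmd)
		fmt.Println("---")
	}
}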
	I0719 14:42:05.552466   22606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:42:05.552487   22606 buildroot.go:174] setting up certificates
	I0719 14:42:05.552513   22606 provision.go:84] configureAuth start
	I0719 14:42:05.552531   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:42:05.552853   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:05.555930   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.556337   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.556365   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.556681   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.559529   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.559930   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.559953   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.560153   22606 provision.go:143] copyHostCerts
	I0719 14:42:05.560190   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:42:05.560227   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:42:05.560235   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:42:05.560315   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:42:05.560407   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:42:05.560427   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:42:05.560433   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:42:05.560467   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:42:05.560525   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:42:05.560542   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:42:05.560549   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:42:05.560584   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:42:05.560650   22606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305-m03 san=[127.0.0.1 192.168.39.250 ha-999305-m03 localhost minikube]
	I0719 14:42:05.673075   22606 provision.go:177] copyRemoteCerts
	I0719 14:42:05.673145   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:42:05.673170   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.676252   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.676673   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.676707   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.676885   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.677117   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.677285   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.677425   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:05.762105   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:42:05.762191   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:42:05.787706   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:42:05.787782   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 14:42:05.812938   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:42:05.813014   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 14:42:05.838283   22606 provision.go:87] duration metric: took 285.753561ms to configureAuth
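configureAuth above issues a server certificate signed by the cluster CA with the SANs the log lists (loopback, the node IP, the hostname, localhost, minikube). An illustrative crypto/x509 sketch of that step; it creates a throwaway CA so it is runnable end to end, whereas the real flow loads the existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumption: a fresh CA stands in for the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list shown in the log for ha-999305-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-999305-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.250")},
		DNSNames:     []string{"ha-999305-m03", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}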
	I0719 14:42:05.838310   22606 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:42:05.838652   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:42:05.838799   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.841681   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.842048   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.842078   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.842218   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.842439   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.842591   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.842829   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.842976   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.843178   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.843198   22606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:42:06.128093   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:42:06.128122   22606 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:42:06.128130   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetURL
	I0719 14:42:06.129415   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Using libvirt version 6000000
	I0719 14:42:06.131535   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.131940   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.131966   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.132200   22606 main.go:141] libmachine: Docker is up and running!
	I0719 14:42:06.132220   22606 main.go:141] libmachine: Reticulating splines...
	I0719 14:42:06.132229   22606 client.go:171] duration metric: took 23.865221578s to LocalClient.Create
	I0719 14:42:06.132261   22606 start.go:167] duration metric: took 23.865291689s to libmachine.API.Create "ha-999305"
	I0719 14:42:06.132271   22606 start.go:293] postStartSetup for "ha-999305-m03" (driver="kvm2")
	I0719 14:42:06.132286   22606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:42:06.132307   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.132514   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:42:06.132538   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:06.134621   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.134905   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.134927   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.135015   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.135187   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.135375   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.135551   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:06.221748   22606 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:42:06.226464   22606 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:42:06.226496   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:42:06.226580   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:42:06.226667   22606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:42:06.226677   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:42:06.226755   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:42:06.237126   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:42:06.263232   22606 start.go:296] duration metric: took 130.946805ms for postStartSetup
	I0719 14:42:06.263277   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetConfigRaw
	I0719 14:42:06.263869   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:06.266688   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.267104   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.267132   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.267479   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:42:06.267735   22606 start.go:128] duration metric: took 24.020856532s to createHost
	I0719 14:42:06.267769   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:06.270465   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.270837   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.270874   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.271037   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.271227   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.271375   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.271533   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.271706   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:06.271912   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:06.271926   22606 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:42:06.383378   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721400126.348944461
	
	I0719 14:42:06.383403   22606 fix.go:216] guest clock: 1721400126.348944461
	I0719 14:42:06.383413   22606 fix.go:229] Guest: 2024-07-19 14:42:06.348944461 +0000 UTC Remote: 2024-07-19 14:42:06.267751669 +0000 UTC m=+218.535262765 (delta=81.192792ms)
	I0719 14:42:06.383440   22606 fix.go:200] guest clock delta is within tolerance: 81.192792ms
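The guest-clock check above reads `date +%s.%N` on the guest and compares it against the host clock. A small sketch of that comparison, parsing the value printed in the log; the one-second tolerance here is an assumption for the example, not minikube's threshold:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1721400126.348944461" // guest's `date +%s.%N`, as seen above
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	tolerance := time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}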
	I0719 14:42:06.383448   22606 start.go:83] releasing machines lock for "ha-999305-m03", held for 24.136678926s
	I0719 14:42:06.383487   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.383737   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:06.386212   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.386715   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.386746   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.389297   22606 out.go:177] * Found network options:
	I0719 14:42:06.390873   22606 out.go:177]   - NO_PROXY=192.168.39.240,192.168.39.163
	W0719 14:42:06.392060   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 14:42:06.392081   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:42:06.392095   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.392741   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.392926   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.393040   22606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:42:06.393082   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	W0719 14:42:06.393108   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 14:42:06.393135   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:42:06.393230   22606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:42:06.393250   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:06.395892   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396195   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396241   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.396267   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396361   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.396532   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.396717   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.396749   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.396776   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396879   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:06.396948   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.397076   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.397192   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.397435   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:06.651418   22606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:42:06.657681   22606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:42:06.657740   22606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:42:06.674396   22606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:42:06.674429   22606 start.go:495] detecting cgroup driver to use...
	I0719 14:42:06.674519   22606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:42:06.693586   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:42:06.709626   22606 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:42:06.709705   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:42:06.726709   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:42:06.742662   22606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:42:06.869913   22606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:42:07.020252   22606 docker.go:233] disabling docker service ...
	I0719 14:42:07.020311   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:42:07.036261   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:42:07.050577   22606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:42:07.211233   22606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:42:07.331892   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:42:07.347994   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:42:07.369093   22606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:42:07.369157   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.380134   22606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:42:07.380206   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.392471   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.404677   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.417011   22606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:42:07.429508   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.441319   22606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.460150   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
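The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs. A minimal sketch of the two central edits done as in-memory string rewrites; the sample config snippet is invented for illustration, the keys and replacement values mirror the log:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.5"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}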
	I0719 14:42:07.471989   22606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:42:07.482871   22606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:42:07.482944   22606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:42:07.498590   22606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:42:07.509676   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:42:07.619316   22606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:42:07.774139   22606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:42:07.774222   22606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:42:07.780154   22606 start.go:563] Will wait 60s for crictl version
	I0719 14:42:07.780218   22606 ssh_runner.go:195] Run: which crictl
	I0719 14:42:07.784105   22606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:42:07.826224   22606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:42:07.826315   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:42:07.854718   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:42:07.886427   22606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:42:07.887867   22606 out.go:177]   - env NO_PROXY=192.168.39.240
	I0719 14:42:07.889136   22606 out.go:177]   - env NO_PROXY=192.168.39.240,192.168.39.163
	I0719 14:42:07.890351   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:07.894403   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:07.894758   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:07.894774   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:07.895059   22606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:42:07.899593   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
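The one-liner above rewrites /etc/hosts idempotently: drop any stale host.minikube.internal line, then append the gateway mapping. A hedged sketch of the same logic as pure string manipulation (the real code runs it over SSH on the guest; the sample hosts content is invented):

package main

import (
	"fmt"
	"strings"
)

func main() {
	hosts := "127.0.0.1 localhost\n192.168.39.1\thost.minikube.internal\n"
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		// same filter as: grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	fmt.Println(strings.Join(kept, "\n"))
}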
	I0719 14:42:07.913074   22606 mustload.go:65] Loading cluster: ha-999305
	I0719 14:42:07.913366   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:42:07.913802   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:42:07.913856   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:42:07.930755   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45519
	I0719 14:42:07.931215   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:42:07.931704   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:42:07.931726   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:42:07.932078   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:42:07.932232   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:42:07.933780   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:42:07.934078   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:42:07.934117   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:42:07.949546   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0719 14:42:07.949968   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:42:07.950463   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:42:07.950490   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:42:07.950817   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:42:07.951027   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:42:07.951239   22606 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.250
	I0719 14:42:07.951256   22606 certs.go:194] generating shared ca certs ...
	I0719 14:42:07.951291   22606 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:42:07.951521   22606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:42:07.951573   22606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:42:07.951583   22606 certs.go:256] generating profile certs ...
	I0719 14:42:07.951694   22606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:42:07.951723   22606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9
	I0719 14:42:07.951741   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.163 192.168.39.250 192.168.39.254]
	I0719 14:42:08.155558   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9 ...
	I0719 14:42:08.155589   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9: {Name:mka66c74b7110ebe18159f5d744d4156e88f5f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:42:08.155770   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9 ...
	I0719 14:42:08.155784   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9: {Name:mk29bd0294b90a74ad2dd8700ab0de425474ddd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:42:08.155865   22606 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:42:08.156048   22606 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
	I0719 14:42:08.156233   22606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
	I0719 14:42:08.156254   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:42:08.156274   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:42:08.156291   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:42:08.156309   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:42:08.156325   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:42:08.156345   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:42:08.156364   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:42:08.156381   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 14:42:08.156450   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:42:08.156492   22606 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:42:08.156502   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:42:08.156532   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:42:08.156563   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:42:08.156590   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:42:08.156641   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:42:08.156682   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.156707   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.156723   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.156762   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:42:08.159987   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:08.160401   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:42:08.160424   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:08.160647   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:42:08.160840   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:42:08.161020   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:42:08.161118   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:42:08.234677   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 14:42:08.240082   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 14:42:08.253164   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 14:42:08.258475   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 14:42:08.272435   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 14:42:08.278073   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 14:42:08.299545   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 14:42:08.305145   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 14:42:08.317125   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 14:42:08.323174   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 14:42:08.336766   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 14:42:08.341856   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 14:42:08.353945   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:42:08.381045   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:42:08.406809   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:42:08.432257   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:42:08.458849   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 14:42:08.484505   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 14:42:08.508480   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:42:08.534839   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:42:08.561120   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:42:08.586976   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:42:08.612690   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:42:08.638273   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 14:42:08.655516   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 14:42:08.672081   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 14:42:08.691302   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 14:42:08.711127   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 14:42:08.729620   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 14:42:08.749185   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 14:42:08.769354   22606 ssh_runner.go:195] Run: openssl version
	I0719 14:42:08.776342   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:42:08.788591   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.793899   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.793959   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.800347   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:42:08.812460   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:42:08.825067   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.830454   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.830523   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.836894   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:42:08.848857   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:42:08.860838   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.866461   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.866518   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.872768   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
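The three `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA under OpenSSL's subject-hash lookup convention: the hash that openssl prints (for example b5213941 for minikubeCA.pem, 3ec20f2e for 110122.pem) becomes the symlink name <hash>.0 in /etc/ssl/certs, which is where TLS clients on the node look up trusted CAs. A minimal Go sketch of that convention, assuming openssl is on PATH; this is illustrative only, not minikube's code (minikube drives the equivalent shell commands over SSH, as logged above):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCATrustLink mirrors the "openssl x509 -hash" + "ln -fs" steps above:
    // it asks openssl for the subject hash of a PEM certificate and links the
    // certificate to /etc/ssl/certs/<hash>.0.
    func installCATrustLink(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // behave like "ln -fs": replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCATrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }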
	I0719 14:42:08.884003   22606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:42:08.888485   22606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:42:08.888541   22606 kubeadm.go:934] updating node {m03 192.168.39.250 8443 v1.30.3 crio true true} ...
	I0719 14:42:08.888649   22606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:42:08.888685   22606 kube-vip.go:115] generating kube-vip config ...
	I0719 14:42:08.888729   22606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:42:08.904318   22606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 14:42:08.904380   22606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
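The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 and load-balances API traffic on port 8443 across the control-plane nodes. A purely illustrative Go check, not part of minikube, that the VIP endpoint is accepting connections:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Dial the address kube-vip advertises (taken from the config above) to
    // confirm the HA API endpoint is reachable over TCP.
    func main() {
    	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
    	if err != nil {
    		fmt.Println("control-plane VIP not reachable yet:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("control-plane VIP is accepting connections on 8443")
    }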
	I0719 14:42:08.904433   22606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:42:08.915153   22606 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 14:42:08.915205   22606 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 14:42:08.925502   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 14:42:08.925525   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:42:08.925526   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 14:42:08.925535   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 14:42:08.925551   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:42:08.925567   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:42:08.925577   22606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:42:08.925614   22606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:42:08.933493   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 14:42:08.933522   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 14:42:08.933534   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 14:42:08.933558   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 14:42:08.963552   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:42:08.963668   22606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:42:09.088117   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 14:42:09.088163   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
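The "Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256" lines above mean each kubectl/kubeadm/kubelet binary is fetched from the official release bucket and verified against its published SHA-256 file before being copied into /var/lib/minikube/binaries. A simplified Go sketch of that download-and-verify step; illustrative only (no retries, no streaming, not minikube's downloader):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch reads a URL fully into memory; fine for a sketch, not for a 100 MB
    // kubelet binary in production code.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm"
    	bin, err := fetch(base)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	sumFile, err := fetch(base + ".sha256") // the checksum file named in the log
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	want := strings.Fields(string(sumFile))[0] // digest is the first token
    	sum := sha256.Sum256(bin)
    	if got := hex.EncodeToString(sum[:]); got != want {
    		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
    		os.Exit(1)
    	}
    	fmt.Printf("kubeadm verified, %d bytes\n", len(bin))
    }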
	I0719 14:42:09.905347   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 14:42:09.917595   22606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 14:42:09.935190   22606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:42:09.953121   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 14:42:09.970645   22606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:42:09.974872   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
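The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the HA VIP, so the kubeadm join below can resolve the shared control-plane endpoint locally. After it runs, the file contains a line equivalent to:

    192.168.39.254	control-plane.minikube.internal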
	I0719 14:42:09.988254   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:42:10.123875   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:42:10.144548   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:42:10.145162   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:42:10.145230   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:42:10.160984   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0719 14:42:10.161371   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:42:10.161822   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:42:10.161844   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:42:10.162156   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:42:10.162353   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:42:10.162487   22606 start.go:317] joinCluster: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:42:10.162642   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 14:42:10.162661   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:42:10.165562   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:10.165975   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:42:10.166008   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:10.166168   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:42:10.166350   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:42:10.166501   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:42:10.166618   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:42:10.326567   22606 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:42:10.326703   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmmbuf.uyj5wommz39npgzn --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0719 14:42:33.887098   22606 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmmbuf.uyj5wommz39npgzn --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (23.56035373s)
	I0719 14:42:33.887135   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 14:42:34.455281   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-999305-m03 minikube.k8s.io/updated_at=2024_07_19T14_42_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=ha-999305 minikube.k8s.io/primary=false
	I0719 14:42:34.618715   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-999305-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 14:42:34.753210   22606 start.go:319] duration metric: took 24.590729029s to joinCluster
	I0719 14:42:34.753292   22606 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:42:34.753782   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:42:34.754753   22606 out.go:177] * Verifying Kubernetes components...
	I0719 14:42:34.755976   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:42:34.939090   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:42:34.955163   22606 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:42:34.955521   22606 kapi.go:59] client config for ha-999305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key", CAFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 14:42:34.955631   22606 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0719 14:42:34.955993   22606 node_ready.go:35] waiting up to 6m0s for node "ha-999305-m03" to be "Ready" ...
	I0719 14:42:34.956124   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:34.956135   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:34.956147   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:34.956153   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:34.959751   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:35.457117   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:35.457141   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:35.457152   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:35.457156   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:35.461180   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:35.957115   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:35.957142   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:35.957153   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:35.957159   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:35.960910   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:36.457092   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:36.457119   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:36.457130   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:36.457136   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:36.460977   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:36.956422   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:36.956446   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:36.956458   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:36.956466   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:36.961459   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:36.962412   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:37.456187   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:37.456209   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:37.456218   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:37.456224   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:37.460123   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:37.957077   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:37.957097   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:37.957108   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:37.957113   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:37.965507   22606 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 14:42:38.457128   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:38.457152   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:38.457161   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:38.457166   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:38.460458   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:38.957157   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:38.957182   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:38.957192   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:38.957199   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:38.960795   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:39.457193   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:39.457216   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:39.457227   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:39.457233   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:39.460706   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:39.462038   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:39.957050   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:39.957073   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:39.957085   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:39.957090   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:39.961127   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:40.456852   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:40.456877   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:40.456894   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:40.456902   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:40.461231   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:40.957007   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:40.957027   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:40.957033   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:40.957037   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:40.960618   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:41.456339   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:41.456406   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:41.456421   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:41.456425   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:41.460713   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:41.957108   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:41.957132   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:41.957140   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:41.957145   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:41.960629   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:41.961502   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:42.457049   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:42.457072   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:42.457090   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:42.457094   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:42.461328   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:42.956821   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:42.956848   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:42.956862   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:42.956867   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:42.959890   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:43.456347   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:43.456371   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:43.456379   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:43.456382   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:43.460228   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:43.957200   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:43.957227   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:43.957247   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:43.957252   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:43.960794   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:44.456739   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:44.456760   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:44.456768   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:44.456772   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:44.460098   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:44.460570   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:44.957076   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:44.957103   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:44.957114   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:44.957122   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:44.960760   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:45.456191   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:45.456219   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:45.456228   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:45.456233   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:45.462116   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:42:45.956610   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:45.956631   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:45.956639   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:45.956642   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:45.959898   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:46.456872   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:46.456898   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:46.456906   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:46.456909   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:46.460426   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:46.461232   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:46.956939   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:46.956962   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:46.956973   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:46.956977   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:46.960379   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:47.456802   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:47.456827   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:47.456835   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:47.456839   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:47.460272   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:47.956737   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:47.956759   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:47.956766   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:47.956769   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:47.960457   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:48.456702   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:48.456726   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:48.456740   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:48.456744   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:48.459473   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:48.956908   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:48.956947   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:48.956958   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:48.956962   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:48.960089   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:48.960793   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:49.456957   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:49.456981   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:49.456991   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:49.456996   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:49.461129   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:49.956657   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:49.956708   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:49.956721   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:49.956727   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:49.959955   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:50.456639   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:50.456663   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:50.456670   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:50.456675   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:50.463031   22606 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 14:42:50.957102   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:50.957126   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:50.957137   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:50.957146   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:50.960456   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:50.961262   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:51.456607   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:51.456633   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.456643   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.456651   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.460171   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:51.956523   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:51.956552   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.956564   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.956571   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.960312   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:51.960845   22606 node_ready.go:49] node "ha-999305-m03" has status "Ready":"True"
	I0719 14:42:51.960866   22606 node_ready.go:38] duration metric: took 17.004855917s for node "ha-999305-m03" to be "Ready" ...
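The repeated GETs of /api/v1/nodes/ha-999305-m03 above are the node_ready poll: minikube keeps fetching the node until its Ready condition reports True, which took about 17 seconds here. A sketch of the same check using client-go; the kubeconfig path is the one loaded in the log above, and the poll interval and error handling are illustrative, not minikube's implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's NodeReady condition is True,
    // the same condition the node_ready check above inspects.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-999305-m03", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }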
	I0719 14:42:51.960877   22606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:42:51.960946   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:51.960954   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.960961   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.960965   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.966819   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:42:51.974936   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.975026   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9sxgr
	I0719 14:42:51.975038   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.975048   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.975052   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.977986   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.978835   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:51.978851   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.978861   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.978868   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.981993   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:51.982529   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:51.982551   22606 pod_ready.go:81] duration metric: took 7.586598ms for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.982569   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.982644   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gtwxd
	I0719 14:42:51.982656   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.982665   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.982676   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.985021   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.985638   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:51.985653   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.985658   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.985661   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.988327   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.988790   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:51.988806   22606 pod_ready.go:81] duration metric: took 6.22847ms for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.988818   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.988886   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305
	I0719 14:42:51.988897   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.988907   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.988914   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.991620   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.992191   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:51.992207   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.992214   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.992220   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.994617   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.995050   22606 pod_ready.go:92] pod "etcd-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:51.995070   22606 pod_ready.go:81] duration metric: took 6.240102ms for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.995081   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.995154   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m02
	I0719 14:42:51.995165   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.995184   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.995193   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.997965   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.998609   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:51.998623   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.998630   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.998633   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.002349   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.002850   22606 pod_ready.go:92] pod "etcd-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:52.002872   22606 pod_ready.go:81] duration metric: took 7.767749ms for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.002883   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.157318   22606 request.go:629] Waited for 154.360427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m03
	I0719 14:42:52.157390   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m03
	I0719 14:42:52.157398   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.157406   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.157409   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.161535   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:52.356970   22606 request.go:629] Waited for 194.29248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:52.357052   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:52.357064   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.357075   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.357083   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.360523   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.361168   22606 pod_ready.go:92] pod "etcd-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:52.361187   22606 pod_ready.go:81] duration metric: took 358.296734ms for pod "etcd-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
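The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's default client-side rate limiter (QPS 5, burst 10 when rest.Config leaves them unset), which is why consecutive node and pod GETs pick up 150-200ms delays even though the API server answers in a few milliseconds. A small, illustrative sketch of raising those limits on a client; this is not something minikube configures in this run:

    package kubeclient

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // NewFastClient raises client-go's client-side rate limits (default QPS 5,
    // burst 10) so bursts of status polls are not delayed by the throttler that
    // produced the "Waited for ..." messages above. Values are illustrative.
    func NewFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50
    	cfg.Burst = 100
    	return kubernetes.NewForConfig(cfg)
    }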
	I0719 14:42:52.361202   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.557398   22606 request.go:629] Waited for 196.137818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:42:52.557472   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:42:52.557479   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.557487   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.557495   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.561209   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.757252   22606 request.go:629] Waited for 195.355592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:52.757304   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:52.757309   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.757316   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.757320   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.760530   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.761162   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:52.761184   22606 pod_ready.go:81] duration metric: took 399.974493ms for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.761196   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.956945   22606 request.go:629] Waited for 195.673996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:42:52.957033   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:42:52.957045   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.957057   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.957066   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.960450   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.156508   22606 request.go:629] Waited for 195.301603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:53.156574   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:53.156580   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.156587   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.156592   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.159883   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.160421   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:53.160437   22606 pod_ready.go:81] duration metric: took 399.233428ms for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.160446   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.357066   22606 request.go:629] Waited for 196.550702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m03
	I0719 14:42:53.357162   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m03
	I0719 14:42:53.357170   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.357181   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.357189   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.364480   22606 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 14:42:53.557462   22606 request.go:629] Waited for 192.390414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:53.557546   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:53.557555   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.557563   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.557568   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.561614   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:53.562163   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:53.562183   22606 pod_ready.go:81] duration metric: took 401.730871ms for pod "kube-apiserver-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.562196   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.757278   22606 request.go:629] Waited for 195.000821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:42:53.757351   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:42:53.757359   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.757370   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.757380   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.760743   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.956524   22606 request.go:629] Waited for 194.666271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:53.956588   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:53.956593   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.956600   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.956604   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.960380   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.960877   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:53.960900   22606 pod_ready.go:81] duration metric: took 398.69165ms for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.960914   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.156997   22606 request.go:629] Waited for 195.992358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:42:54.157052   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:42:54.157057   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.157064   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.157071   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.160744   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:54.356667   22606 request.go:629] Waited for 195.278383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:54.356720   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:54.356726   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.356736   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.356741   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.359947   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:54.360609   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:54.360626   22606 pod_ready.go:81] duration metric: took 399.705128ms for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.360636   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.556819   22606 request.go:629] Waited for 196.1253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m03
	I0719 14:42:54.556894   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m03
	I0719 14:42:54.556899   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.556907   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.556914   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.560662   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:54.756570   22606 request.go:629] Waited for 195.272157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:54.756653   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:54.756662   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.756675   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.756682   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.759662   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:54.760172   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:54.760188   22606 pod_ready.go:81] duration metric: took 399.546786ms for pod "kube-controller-manager-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.760199   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.957335   22606 request.go:629] Waited for 197.078407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:42:54.957412   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:42:54.957419   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.957429   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.957435   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.960975   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.157039   22606 request.go:629] Waited for 195.391212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:55.157098   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:55.157105   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.157117   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.157123   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.160448   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.161110   22606 pod_ready.go:92] pod "kube-proxy-766sx" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:55.161130   22606 pod_ready.go:81] duration metric: took 400.924486ms for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.161139   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.356583   22606 request.go:629] Waited for 195.367291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:42:55.356643   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:42:55.356648   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.356655   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.356661   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.362651   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:42:55.556973   22606 request.go:629] Waited for 193.379237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:55.557038   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:55.557043   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.557051   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.557055   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.560246   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.560961   22606 pod_ready.go:92] pod "kube-proxy-s2wb7" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:55.560982   22606 pod_ready.go:81] duration metric: took 399.837176ms for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.560993   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-twh47" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.757035   22606 request.go:629] Waited for 195.977548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-twh47
	I0719 14:42:55.757099   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-twh47
	I0719 14:42:55.757106   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.757117   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.757123   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.760958   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.957308   22606 request.go:629] Waited for 195.431235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:55.957386   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:55.957393   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.957401   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.957408   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.961039   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.961994   22606 pod_ready.go:92] pod "kube-proxy-twh47" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:55.962010   22606 pod_ready.go:81] duration metric: took 401.011812ms for pod "kube-proxy-twh47" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.962019   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.157218   22606 request.go:629] Waited for 195.136362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:42:56.157296   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:42:56.157303   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.157311   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.157317   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.160596   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.357526   22606 request.go:629] Waited for 196.357454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:56.357593   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:56.357604   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.357616   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.357622   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.360940   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.361494   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:56.361511   22606 pod_ready.go:81] duration metric: took 399.485902ms for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.361520   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.556614   22606 request.go:629] Waited for 195.031893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:42:56.556681   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:42:56.556690   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.556697   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.556703   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.560254   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.757456   22606 request.go:629] Waited for 196.362234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:56.757542   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:56.757554   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.757563   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.757573   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.760801   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.761609   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:56.761626   22606 pod_ready.go:81] duration metric: took 400.100607ms for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.761634   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.956788   22606 request.go:629] Waited for 195.098635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m03
	I0719 14:42:56.956861   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m03
	I0719 14:42:56.956867   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.956874   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.956881   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.959944   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.157059   22606 request.go:629] Waited for 196.355561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:57.157120   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:57.157135   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.157143   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.157146   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.160298   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.161973   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:57.161993   22606 pod_ready.go:81] duration metric: took 400.352789ms for pod "kube-scheduler-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:57.162005   22606 pod_ready.go:38] duration metric: took 5.2011011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:42:57.162024   22606 api_server.go:52] waiting for apiserver process to appear ...
	I0719 14:42:57.162077   22606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:42:57.180381   22606 api_server.go:72] duration metric: took 22.427053068s to wait for apiserver process to appear ...
	I0719 14:42:57.180398   22606 api_server.go:88] waiting for apiserver healthz status ...
	I0719 14:42:57.180420   22606 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0719 14:42:57.184804   22606 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0719 14:42:57.184870   22606 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0719 14:42:57.184877   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.184884   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.184890   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.185745   22606 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 14:42:57.185803   22606 api_server.go:141] control plane version: v1.30.3
	I0719 14:42:57.185820   22606 api_server.go:131] duration metric: took 5.414651ms to wait for apiserver health ...
	I0719 14:42:57.185832   22606 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 14:42:57.357581   22606 request.go:629] Waited for 171.672444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.357642   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.357649   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.357663   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.357671   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.365728   22606 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 14:42:57.372152   22606 system_pods.go:59] 24 kube-system pods found
	I0719 14:42:57.372186   22606 system_pods.go:61] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:42:57.372191   22606 system_pods.go:61] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:42:57.372195   22606 system_pods.go:61] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:42:57.372198   22606 system_pods.go:61] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:42:57.372203   22606 system_pods.go:61] "etcd-ha-999305-m03" [f15da934-29c7-444e-9e54-155ef0fb3145] Running
	I0719 14:42:57.372207   22606 system_pods.go:61] "kindnet-b7lvb" [fdca060a-b2bf-4c7c-aea7-289593af789f] Running
	I0719 14:42:57.372210   22606 system_pods.go:61] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:42:57.372214   22606 system_pods.go:61] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:42:57.372217   22606 system_pods.go:61] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:42:57.372222   22606 system_pods.go:61] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:42:57.372229   22606 system_pods.go:61] "kube-apiserver-ha-999305-m03" [d02979f6-fd79-424c-a802-f40f6c484689] Running
	I0719 14:42:57.372238   22606 system_pods.go:61] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:42:57.372248   22606 system_pods.go:61] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:42:57.372256   22606 system_pods.go:61] "kube-controller-manager-ha-999305-m03" [2f599812-e46f-4151-aae3-37d551e7b26e] Running
	I0719 14:42:57.372262   22606 system_pods.go:61] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:42:57.372271   22606 system_pods.go:61] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:42:57.372280   22606 system_pods.go:61] "kube-proxy-twh47" [dabe7d25-8bd8-42f8-9efd-0c800be277b3] Running
	I0719 14:42:57.372287   22606 system_pods.go:61] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:42:57.372296   22606 system_pods.go:61] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:42:57.372305   22606 system_pods.go:61] "kube-scheduler-ha-999305-m03" [ba5e9e04-3ebb-4839-8b1f-df899690be04] Running
	I0719 14:42:57.372311   22606 system_pods.go:61] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:42:57.372319   22606 system_pods.go:61] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:42:57.372325   22606 system_pods.go:61] "kube-vip-ha-999305-m03" [c47c9bb1-e77b-40a3-a92f-9702dbb222ff] Running
	I0719 14:42:57.372331   22606 system_pods.go:61] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:42:57.372343   22606 system_pods.go:74] duration metric: took 186.500633ms to wait for pod list to return data ...
	I0719 14:42:57.372357   22606 default_sa.go:34] waiting for default service account to be created ...
	I0719 14:42:57.556905   22606 request.go:629] Waited for 184.456317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:42:57.556971   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:42:57.556979   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.556991   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.557000   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.560115   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.560229   22606 default_sa.go:45] found service account: "default"
	I0719 14:42:57.560243   22606 default_sa.go:55] duration metric: took 187.875258ms for default service account to be created ...
	I0719 14:42:57.560251   22606 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 14:42:57.757551   22606 request.go:629] Waited for 197.240039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.757646   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.757654   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.757663   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.757669   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.764689   22606 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 14:42:57.772768   22606 system_pods.go:86] 24 kube-system pods found
	I0719 14:42:57.772795   22606 system_pods.go:89] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:42:57.772800   22606 system_pods.go:89] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:42:57.772804   22606 system_pods.go:89] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:42:57.772809   22606 system_pods.go:89] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:42:57.772813   22606 system_pods.go:89] "etcd-ha-999305-m03" [f15da934-29c7-444e-9e54-155ef0fb3145] Running
	I0719 14:42:57.772817   22606 system_pods.go:89] "kindnet-b7lvb" [fdca060a-b2bf-4c7c-aea7-289593af789f] Running
	I0719 14:42:57.772821   22606 system_pods.go:89] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:42:57.772825   22606 system_pods.go:89] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:42:57.772829   22606 system_pods.go:89] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:42:57.772832   22606 system_pods.go:89] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:42:57.772836   22606 system_pods.go:89] "kube-apiserver-ha-999305-m03" [d02979f6-fd79-424c-a802-f40f6c484689] Running
	I0719 14:42:57.772840   22606 system_pods.go:89] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:42:57.772844   22606 system_pods.go:89] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:42:57.772849   22606 system_pods.go:89] "kube-controller-manager-ha-999305-m03" [2f599812-e46f-4151-aae3-37d551e7b26e] Running
	I0719 14:42:57.772853   22606 system_pods.go:89] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:42:57.772857   22606 system_pods.go:89] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:42:57.772862   22606 system_pods.go:89] "kube-proxy-twh47" [dabe7d25-8bd8-42f8-9efd-0c800be277b3] Running
	I0719 14:42:57.772867   22606 system_pods.go:89] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:42:57.772875   22606 system_pods.go:89] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:42:57.772879   22606 system_pods.go:89] "kube-scheduler-ha-999305-m03" [ba5e9e04-3ebb-4839-8b1f-df899690be04] Running
	I0719 14:42:57.772884   22606 system_pods.go:89] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:42:57.772889   22606 system_pods.go:89] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:42:57.772894   22606 system_pods.go:89] "kube-vip-ha-999305-m03" [c47c9bb1-e77b-40a3-a92f-9702dbb222ff] Running
	I0719 14:42:57.772898   22606 system_pods.go:89] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:42:57.772906   22606 system_pods.go:126] duration metric: took 212.648177ms to wait for k8s-apps to be running ...
	I0719 14:42:57.772915   22606 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 14:42:57.772953   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:42:57.790515   22606 system_svc.go:56] duration metric: took 17.590313ms WaitForService to wait for kubelet
	I0719 14:42:57.790544   22606 kubeadm.go:582] duration metric: took 23.037217643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:42:57.790570   22606 node_conditions.go:102] verifying NodePressure condition ...
	I0719 14:42:57.956745   22606 request.go:629] Waited for 166.090864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0719 14:42:57.956807   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0719 14:42:57.956812   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.956819   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.956826   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.960793   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.961763   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:42:57.961785   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:42:57.961802   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:42:57.961806   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:42:57.961815   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:42:57.961820   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:42:57.961823   22606 node_conditions.go:105] duration metric: took 171.248783ms to run NodePressure ...
	I0719 14:42:57.961836   22606 start.go:241] waiting for startup goroutines ...
	I0719 14:42:57.961861   22606 start.go:255] writing updated cluster config ...
	I0719 14:42:57.962141   22606 ssh_runner.go:195] Run: rm -f paused
	I0719 14:42:58.014427   22606 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 14:42:58.016272   22606 out.go:177] * Done! kubectl is now configured to use "ha-999305" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.270554318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400400270492788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1648b7a-ad7a-421d-9938-ea21e0c53ffb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.271221626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85d3f31a-e270-4401-a496-785f356d7762 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.271274347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85d3f31a-e270-4401-a496-785f356d7762 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.271726300Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85d3f31a-e270-4401-a496-785f356d7762 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.311060790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45d270d2-a6d6-4ac3-8f48-b73c6b792aaa name=/runtime.v1.RuntimeService/Version
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.311167497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45d270d2-a6d6-4ac3-8f48-b73c6b792aaa name=/runtime.v1.RuntimeService/Version
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.312644977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71660cc1-184e-4a9d-b030-4bfb7471f246 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.313431606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400400313404648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71660cc1-184e-4a9d-b030-4bfb7471f246 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.314059995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=014c1b8b-ab8e-4fe4-b114-ad57a394315f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.314114246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=014c1b8b-ab8e-4fe4-b114-ad57a394315f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.314351301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=014c1b8b-ab8e-4fe4-b114-ad57a394315f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.352735297Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b9761b1-72f5-4181-b9c2-7fb3db5750d1 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.353066503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b9761b1-72f5-4181-b9c2-7fb3db5750d1 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.354261481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80c7b0d7-e729-4b2b-aa18-f829abe80411 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.354667094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400400354646196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80c7b0d7-e729-4b2b-aa18-f829abe80411 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.355217122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=037405e7-8877-470f-8420-e98b8340907b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.355270192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=037405e7-8877-470f-8420-e98b8340907b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.355496273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=037405e7-8877-470f-8420-e98b8340907b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.400987386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85ca3011-b894-4e37-a243-73e25b7cb655 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.401098094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85ca3011-b894-4e37-a243-73e25b7cb655 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.403650044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39164cd4-6592-4a67-9de9-9cd159d93277 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.405373195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400400405343118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39164cd4-6592-4a67-9de9-9cd159d93277 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.406307583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=273fd451-e06f-49c3-bb9f-12c0fe887ab2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.406503320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=273fd451-e06f-49c3-bb9f-12c0fe887ab2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:46:40 ha-999305 crio[679]: time="2024-07-19 14:46:40.407008886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=273fd451-e06f-49c3-bb9f-12c0fe887ab2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d401082f94c28       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   f0b7b801c04fe       busybox-fc5497c4f-2rfw6
	7b8829b3ccfbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   6fe36c95a046d       storage-provisioner
	8a1cd64a0c897       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   35affd85abc52       coredns-7db6d8ff4d-gtwxd
	60ddffbf7c51f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   1eb500abeaf59       coredns-7db6d8ff4d-9sxgr
	f411cdcc4b000       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      7 minutes ago       Running             kindnet-cni               0                   b21ce83a41d26       kindnet-tpffr
	3df47e2e7e71d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   0bc58fc40b11b       kube-proxy-s2wb7
	f81aa97ac4ed4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   63a7f05b44c00       kube-vip-ha-999305
	4106d6aa51360       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   4fe960d43fbe4       etcd-ha-999305
	85e5d02964a27       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   7ac313e234322       kube-controller-manager-ha-999305
	eea532e07ff56       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   01e1ea6c3d6e9       kube-scheduler-ha-999305
	21f9837a6d159       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   e32b9b9f27b98       kube-apiserver-ha-999305
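A hedged note on the table above: it is the per-node CRI container listing, so it can normally be regenerated straight from the guest with crictl. A minimal sketch, assuming the profile name ha-999305 from these logs and that sudo is available inside the minikube guest:

    # List all CRI-O containers (running and exited) on the primary node
    out/minikube-linux-amd64 -p ha-999305 ssh "sudo crictl ps -a"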
	
	
	==> coredns [60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75] <==
	[INFO] 10.244.2.2:39902 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000117355s
	[INFO] 10.244.1.2:47815 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090125s
	[INFO] 10.244.0.4:60010 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113473s
	[INFO] 10.244.0.4:58011 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130179s
	[INFO] 10.244.0.4:42306 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008431977s
	[INFO] 10.244.0.4:37231 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199786s
	[INFO] 10.244.0.4:46408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015144s
	[INFO] 10.244.2.2:44298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253661s
	[INFO] 10.244.2.2:46320 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124288s
	[INFO] 10.244.2.2:55428 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001507596s
	[INFO] 10.244.2.2:49678 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072967s
	[INFO] 10.244.1.2:50895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001783712s
	[INFO] 10.244.1.2:40165 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093772s
	[INFO] 10.244.1.2:53172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001252641s
	[INFO] 10.244.1.2:34815 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105356s
	[INFO] 10.244.1.2:37850 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213269s
	[INFO] 10.244.2.2:37470 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132796s
	[INFO] 10.244.1.2:53739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116332s
	[INFO] 10.244.1.2:49785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150432s
	[INFO] 10.244.1.2:39191 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095042s
	[INFO] 10.244.0.4:54115 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158247s
	[INFO] 10.244.2.2:54824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010194s
	[INFO] 10.244.2.2:53937 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137939s
	[INFO] 10.244.2.2:32859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135977s
	[INFO] 10.244.1.2:38346 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011678s
	
	
	==> coredns [8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59] <==
	[INFO] 10.244.0.4:57271 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004136013s
	[INFO] 10.244.0.4:41245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207213s
	[INFO] 10.244.0.4:53550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131429s
	[INFO] 10.244.2.2:43045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176233s
	[INFO] 10.244.2.2:58868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001941494s
	[INFO] 10.244.2.2:46158 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115413s
	[INFO] 10.244.2.2:48082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182529s
	[INFO] 10.244.1.2:43898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136537s
	[INFO] 10.244.1.2:41884 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111392s
	[INFO] 10.244.1.2:37393 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070881s
	[INFO] 10.244.0.4:38875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088591s
	[INFO] 10.244.0.4:39118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123769s
	[INFO] 10.244.0.4:52630 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045788s
	[INFO] 10.244.0.4:40500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041439s
	[INFO] 10.244.2.2:60125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195649s
	[INFO] 10.244.2.2:60453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126438s
	[INFO] 10.244.2.2:49851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00022498s
	[INFO] 10.244.1.2:57692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010212s
	[INFO] 10.244.0.4:59894 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230322s
	[INFO] 10.244.0.4:42506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177637s
	[INFO] 10.244.0.4:53162 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099069s
	[INFO] 10.244.2.2:44371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126437s
	[INFO] 10.244.1.2:47590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107441s
	[INFO] 10.244.1.2:44734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130206s
	[INFO] 10.244.1.2:33311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075949s
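The two CoreDNS blocks above are per-pod query logs. A minimal sketch for pulling them again, assuming the kubectl context is named after the profile (ha-999305) and using the pod names from the container listing:

    # Fetch CoreDNS query logs from each replica
    kubectl --context ha-999305 -n kube-system logs coredns-7db6d8ff4d-9sxgr
    kubectl --context ha-999305 -n kube-system logs coredns-7db6d8ff4d-gtwxd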
	
	
	==> describe nodes <==
	Name:               ha-999305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T14_39_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:46:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-999305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1230c1bed065421db8c3e4d5f899877a
	  System UUID:                1230c1be-d065-421d-b8c3-e4d5f899877a
	  Boot ID:                    7e7082ac-a784-4d5a-9539-9692157a7b3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2rfw6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-7db6d8ff4d-9sxgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m22s
	  kube-system                 coredns-7db6d8ff4d-gtwxd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m22s
	  kube-system                 etcd-ha-999305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m35s
	  kube-system                 kindnet-tpffr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m23s
	  kube-system                 kube-apiserver-ha-999305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-controller-manager-ha-999305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-proxy-s2wb7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-ha-999305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-vip-ha-999305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m22s  kube-proxy       
	  Normal  Starting                 7m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m35s  kubelet          Node ha-999305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s  kubelet          Node ha-999305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s  kubelet          Node ha-999305 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m24s  node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal  NodeReady                7m10s  kubelet          Node ha-999305 status is now: NodeReady
	  Normal  RegisteredNode           5m7s   node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal  RegisteredNode           3m52s  node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	
	
	Name:               ha-999305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_41_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:41:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:44:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-999305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27a97bc8637c4fba94a7bb397a84b598
	  System UUID:                27a97bc8-637c-4fba-94a7-bb397a84b598
	  Boot ID:                    88201b08-f5f5-4c30-bf5f-464ac33b5a26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pcfwd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-999305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-hsb9f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-999305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-999305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-766sx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-999305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-vip-ha-999305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-999305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeNotReady             98s                    node-controller  Node ha-999305-m02 status is now: NodeNotReady
	
	
	Name:               ha-999305-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_42_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:42:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:46:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-999305-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c04be1041e3417f9ec04f3f6a94b977
	  System UUID:                6c04be10-41e3-417f-9ec0-4f3f6a94b977
	  Boot ID:                    5da44c63-207b-4952-951a-477e5f92088f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6kcdj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-999305-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-b7lvb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-999305-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-999305-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-twh47                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-999305-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-999305-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-999305-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	
	
	Name:               ha-999305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_43_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-999305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d4c450135c44d386a1cb39310dd813
	  System UUID:                74d4c450-135c-44d3-86a1-cb39310dd813
	  Boot ID:                    afc8d137-990f-4f0e-9995-8644e493fa47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-j9gzv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-qqtph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-999305-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-999305-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul19 14:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050186] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040031] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519354] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.261259] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.592080] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.149646] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.056448] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062757] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.176758] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.118673] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.280022] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.245148] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.893793] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.060163] kauditd_printk_skb: 158 callbacks suppressed
	[Jul19 14:39] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.183971] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +6.719863] kauditd_printk_skb: 23 callbacks suppressed
	[ +19.024750] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 14:41] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50] <==
	{"level":"warn","ts":"2024-07-19T14:46:40.488487Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.163:2380/version","remote-member-id":"af1cb735ec0c662e","error":"Get \"https://192.168.39.163:2380/version\": dial tcp 192.168.39.163:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-19T14:46:40.488559Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"af1cb735ec0c662e","error":"Get \"https://192.168.39.163:2380/version\": dial tcp 192.168.39.163:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-19T14:46:40.504189Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.603962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.705158Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.709245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.717949Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.72559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.730963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.735333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.743152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.760444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.768006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.77256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.776968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.787311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.794524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.801521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.804067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.805381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.808433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.814525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.819627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.821492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:46:40.835195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:46:40 up 8 min,  0 users,  load average: 0.79, 0.39, 0.18
	Linux ha-999305 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d] <==
	I0719 14:46:09.898396       1 main.go:303] handling current node
	I0719 14:46:19.892616       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:46:19.892728       1 main.go:303] handling current node
	I0719 14:46:19.892757       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:46:19.892840       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:46:19.893062       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:46:19.893109       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:46:19.893286       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:46:19.893333       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:46:29.893135       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:46:29.893241       1 main.go:303] handling current node
	I0719 14:46:29.893271       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:46:29.893290       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:46:29.893484       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:46:29.893528       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:46:29.893597       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:46:29.893616       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:46:39.902041       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:46:39.902224       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:46:39.902409       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:46:39.902480       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:46:39.902595       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:46:39.902687       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:46:39.902789       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:46:39.902811       1 main.go:303] handling current node
	
	
	==> kube-apiserver [21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723] <==
	W0719 14:39:03.633749       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240]
	I0719 14:39:03.634787       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 14:39:03.639484       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 14:39:03.767631       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 14:39:05.189251       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 14:39:05.207425       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 14:39:05.349604       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 14:39:17.882309       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 14:39:17.947829       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0719 14:43:04.792716       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53552: use of closed network connection
	E0719 14:43:04.984394       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53572: use of closed network connection
	E0719 14:43:05.183081       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53594: use of closed network connection
	E0719 14:43:05.406412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53610: use of closed network connection
	E0719 14:43:05.590573       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53632: use of closed network connection
	E0719 14:43:05.781090       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45438: use of closed network connection
	E0719 14:43:05.968015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45452: use of closed network connection
	E0719 14:43:06.162066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45474: use of closed network connection
	E0719 14:43:06.350426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45486: use of closed network connection
	E0719 14:43:06.643771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45508: use of closed network connection
	E0719 14:43:06.830162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45522: use of closed network connection
	E0719 14:43:07.031495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45540: use of closed network connection
	E0719 14:43:07.209952       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45556: use of closed network connection
	E0719 14:43:07.380822       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45576: use of closed network connection
	E0719 14:43:07.548203       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45590: use of closed network connection
	W0719 14:44:33.653101       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240 192.168.39.250]
	
	
	==> kube-controller-manager [85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf] <==
	I0719 14:42:58.964691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.41µs"
	I0719 14:42:58.965497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.65µs"
	I0719 14:42:58.965993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.853µs"
	I0719 14:42:59.171041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.461714ms"
	I0719 14:42:59.252632       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.957377ms"
	I0719 14:42:59.276267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.565212ms"
	I0719 14:42:59.276371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.353µs"
	I0719 14:42:59.376837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.676174ms"
	I0719 14:42:59.377108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.428µs"
	I0719 14:43:00.290301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.286µs"
	I0719 14:43:02.515412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.114416ms"
	I0719 14:43:02.515696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.239µs"
	I0719 14:43:02.786433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.235802ms"
	I0719 14:43:02.786611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.037µs"
	I0719 14:43:04.370083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.452877ms"
	I0719 14:43:04.370219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.487µs"
	E0719 14:43:38.102577       1 certificate_controller.go:146] Sync csr-m2cbg failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-m2cbg": the object has been modified; please apply your changes to the latest version and try again
	I0719 14:43:38.369779       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-999305-m04\" does not exist"
	E0719 14:43:38.374446       1 certificate_controller.go:146] Sync csr-m2cbg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-m2cbg": the object has been modified; please apply your changes to the latest version and try again
	I0719 14:43:38.404367       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-999305-m04" podCIDRs=["10.244.3.0/24"]
	I0719 14:43:42.034735       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-999305-m04"
	I0719 14:43:59.610941       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-999305-m04"
	I0719 14:45:02.078302       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-999305-m04"
	I0719 14:45:02.324143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.422779ms"
	I0719 14:45:02.324265       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.854µs"
	
	
	==> kube-proxy [3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859] <==
	I0719 14:39:18.608465       1 server_linux.go:69] "Using iptables proxy"
	I0719 14:39:18.624270       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	I0719 14:39:18.670607       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 14:39:18.670721       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 14:39:18.670752       1 server_linux.go:165] "Using iptables Proxier"
	I0719 14:39:18.674486       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 14:39:18.675012       1 server.go:872] "Version info" version="v1.30.3"
	I0719 14:39:18.675061       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:39:18.679010       1 config.go:192] "Starting service config controller"
	I0719 14:39:18.679057       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 14:39:18.679097       1 config.go:101] "Starting endpoint slice config controller"
	I0719 14:39:18.679112       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 14:39:18.679404       1 config.go:319] "Starting node config controller"
	I0719 14:39:18.679431       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 14:39:18.779949       1 shared_informer.go:320] Caches are synced for node config
	I0719 14:39:18.779995       1 shared_informer.go:320] Caches are synced for service config
	I0719 14:39:18.780022       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a] <==
	W0719 14:39:03.143941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 14:39:03.143983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 14:39:03.199239       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 14:39:03.199283       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 14:39:06.302977       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 14:42:30.410490       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-twh47\": pod kube-proxy-twh47 is already assigned to node \"ha-999305-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-twh47" node="ha-999305-m03"
	E0719 14:42:30.410669       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dabe7d25-8bd8-42f8-9efd-0c800be277b3(kube-system/kube-proxy-twh47) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-twh47"
	E0719 14:42:30.410708       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-twh47\": pod kube-proxy-twh47 is already assigned to node \"ha-999305-m03\"" pod="kube-system/kube-proxy-twh47"
	I0719 14:42:30.410775       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-twh47" node="ha-999305-m03"
	E0719 14:43:38.472698       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jb992\": pod kube-proxy-jb992 is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jb992" node="ha-999305-m04"
	E0719 14:43:38.472918       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jb992\": pod kube-proxy-jb992 is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-jb992"
	E0719 14:43:38.481922       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k2hnq\": pod kindnet-k2hnq is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k2hnq" node="ha-999305-m04"
	E0719 14:43:38.482316       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 503c7a35-1ec2-49e3-b043-d756666fdefc(kube-system/kindnet-k2hnq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k2hnq"
	E0719 14:43:38.482355       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k2hnq\": pod kindnet-k2hnq is already assigned to node \"ha-999305-m04\"" pod="kube-system/kindnet-k2hnq"
	I0719 14:43:38.482384       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k2hnq" node="ha-999305-m04"
	E0719 14:43:38.610384       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fwx2c\": pod kube-proxy-fwx2c is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fwx2c" node="ha-999305-m04"
	E0719 14:43:38.610464       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8577838c-7ee2-44bf-bda8-e924f05aa0c0(kube-system/kube-proxy-fwx2c) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fwx2c"
	E0719 14:43:38.610495       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fwx2c\": pod kube-proxy-fwx2c is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-fwx2c"
	I0719 14:43:38.610518       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fwx2c" node="ha-999305-m04"
	E0719 14:43:40.343563       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rjfnq\": pod kube-proxy-rjfnq is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rjfnq" node="ha-999305-m04"
	E0719 14:43:40.343715       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rjfnq\": pod kube-proxy-rjfnq is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-rjfnq"
	E0719 14:43:40.369471       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sg9tk\": pod kube-proxy-sg9tk is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sg9tk" node="ha-999305-m04"
	E0719 14:43:40.369724       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 84ce1807-2a03-4bd0-ba20-a8230833533c(kube-system/kube-proxy-sg9tk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sg9tk"
	E0719 14:43:40.369777       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sg9tk\": pod kube-proxy-sg9tk is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-sg9tk"
	I0719 14:43:40.369852       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sg9tk" node="ha-999305-m04"
	
	
	==> kubelet <==
	Jul 19 14:42:58 ha-999305 kubelet[1369]: E0719 14:42:58.944209    1369 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-999305" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-999305' and this object
	Jul 19 14:42:59 ha-999305 kubelet[1369]: I0719 14:42:59.052275    1369 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqv22\" (UniqueName: \"kubernetes.io/projected/25cd3990-0ad4-44e2-895c-4e8c81e621af-kube-api-access-lqv22\") pod \"busybox-fc5497c4f-2rfw6\" (UID: \"25cd3990-0ad4-44e2-895c-4e8c81e621af\") " pod="default/busybox-fc5497c4f-2rfw6"
	Jul 19 14:43:00 ha-999305 kubelet[1369]: E0719 14:43:00.198830    1369 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jul 19 14:43:00 ha-999305 kubelet[1369]: E0719 14:43:00.199016    1369 projected.go:200] Error preparing data for projected volume kube-api-access-lqv22 for pod default/busybox-fc5497c4f-2rfw6: failed to sync configmap cache: timed out waiting for the condition
	Jul 19 14:43:00 ha-999305 kubelet[1369]: E0719 14:43:00.199205    1369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/25cd3990-0ad4-44e2-895c-4e8c81e621af-kube-api-access-lqv22 podName:25cd3990-0ad4-44e2-895c-4e8c81e621af nodeName:}" failed. No retries permitted until 2024-07-19 14:43:00.699117448 +0000 UTC m=+235.542111467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lqv22" (UniqueName: "kubernetes.io/projected/25cd3990-0ad4-44e2-895c-4e8c81e621af-kube-api-access-lqv22") pod "busybox-fc5497c4f-2rfw6" (UID: "25cd3990-0ad4-44e2-895c-4e8c81e621af") : failed to sync configmap cache: timed out waiting for the condition
	Jul 19 14:43:05 ha-999305 kubelet[1369]: E0719 14:43:05.413257    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:43:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:43:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:43:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:43:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:44:05 ha-999305 kubelet[1369]: E0719 14:44:05.412838    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:44:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:44:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:44:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:44:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:45:05 ha-999305 kubelet[1369]: E0719 14:45:05.413438    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:45:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:45:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:45:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:45:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:46:05 ha-999305 kubelet[1369]: E0719 14:46:05.412017    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:46:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:46:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:46:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:46:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-999305 -n ha-999305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-999305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (52.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (3.210563381s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:46:45.403517   27680 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:46:45.403630   27680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:45.403641   27680 out.go:304] Setting ErrFile to fd 2...
	I0719 14:46:45.403646   27680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:45.403800   27680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:46:45.403956   27680 out.go:298] Setting JSON to false
	I0719 14:46:45.403984   27680 mustload.go:65] Loading cluster: ha-999305
	I0719 14:46:45.404089   27680 notify.go:220] Checking for updates...
	I0719 14:46:45.404376   27680 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:46:45.404392   27680 status.go:255] checking status of ha-999305 ...
	I0719 14:46:45.404829   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:45.404903   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:45.425506   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
	I0719 14:46:45.425911   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:45.426465   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:45.426488   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:45.426914   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:45.427143   27680 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:46:45.428949   27680 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:46:45.428963   27680 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:45.429330   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:45.429374   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:45.444468   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39601
	I0719 14:46:45.444932   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:45.445405   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:45.445424   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:45.445728   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:45.445940   27680 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:46:45.448477   27680 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:45.448933   27680 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:45.448959   27680 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:45.449094   27680 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:45.449355   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:45.449385   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:45.463984   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0719 14:46:45.464325   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:45.464759   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:45.464778   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:45.465116   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:45.465278   27680 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:46:45.465459   27680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:45.465494   27680 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:46:45.468244   27680 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:45.468645   27680 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:45.468670   27680 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:45.468802   27680 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:46:45.468970   27680 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:46:45.469203   27680 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:46:45.469336   27680 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:46:45.549900   27680 ssh_runner.go:195] Run: systemctl --version
	I0719 14:46:45.557046   27680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:45.571769   27680 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:46:45.571794   27680 api_server.go:166] Checking apiserver status ...
	I0719 14:46:45.571843   27680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:46:45.585987   27680 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:46:45.595856   27680 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:46:45.595901   27680 ssh_runner.go:195] Run: ls
	I0719 14:46:45.600446   27680 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:46:45.604812   27680 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:46:45.604848   27680 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:46:45.604862   27680 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:46:45.604880   27680 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:46:45.605269   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:45.605310   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:45.620847   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0719 14:46:45.621225   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:45.621650   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:45.621664   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:45.621977   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:45.622200   27680 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:46:45.623864   27680 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:46:45.623880   27680 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:45.624148   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:45.624186   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:45.638816   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42685
	I0719 14:46:45.639243   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:45.639707   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:45.639735   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:45.640043   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:45.640227   27680 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:46:45.643169   27680 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:45.643677   27680 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:45.643711   27680 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:45.643876   27680 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:45.644250   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:45.644293   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:45.658847   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0719 14:46:45.659318   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:45.659778   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:45.659798   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:45.660095   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:45.660262   27680 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:46:45.660431   27680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:45.660453   27680 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:46:45.662902   27680 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:45.663299   27680 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:45.663328   27680 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:45.663483   27680 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:46:45.663780   27680 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:46:45.663914   27680 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:46:45.664027   27680 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	W0719 14:46:48.210561   27680 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:46:48.210646   27680 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0719 14:46:48.210662   27680 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:48.210670   27680 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:46:48.210703   27680 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:48.210716   27680 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:46:48.211075   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:48.211115   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:48.225889   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
	I0719 14:46:48.226347   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:48.226813   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:48.226837   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:48.227112   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:48.227275   27680 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:46:48.229203   27680 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:46:48.229224   27680 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:46:48.229577   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:48.229615   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:48.244267   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I0719 14:46:48.244702   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:48.245174   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:48.245194   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:48.245496   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:48.245690   27680 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:46:48.248441   27680 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:48.248996   27680 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:46:48.249030   27680 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:48.249195   27680 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:46:48.249558   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:48.249599   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:48.264124   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37949
	I0719 14:46:48.264617   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:48.265154   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:48.265180   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:48.265483   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:48.265689   27680 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:46:48.265868   27680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:48.265891   27680 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:46:48.268754   27680 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:48.269214   27680 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:46:48.269238   27680 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:48.269387   27680 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:46:48.269549   27680 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:46:48.269672   27680 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:46:48.269826   27680 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:46:48.354182   27680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:48.371417   27680 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:46:48.371443   27680 api_server.go:166] Checking apiserver status ...
	I0719 14:46:48.371483   27680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:46:48.388580   27680 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:46:48.402482   27680 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:46:48.402549   27680 ssh_runner.go:195] Run: ls
	I0719 14:46:48.407223   27680 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:46:48.413595   27680 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:46:48.413622   27680 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:46:48.413632   27680 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:46:48.413647   27680 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:46:48.414044   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:48.414094   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:48.429647   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0719 14:46:48.430080   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:48.430669   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:48.430688   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:48.431000   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:48.431279   27680 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:46:48.432867   27680 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:46:48.432882   27680 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:46:48.433174   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:48.433211   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:48.447520   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I0719 14:46:48.447918   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:48.448365   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:48.448383   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:48.448703   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:48.448882   27680 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:46:48.451805   27680 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:48.452217   27680 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:46:48.452257   27680 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:48.452481   27680 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:46:48.452798   27680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:48.452839   27680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:48.470070   27680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43037
	I0719 14:46:48.470519   27680 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:48.471055   27680 main.go:141] libmachine: Using API Version  1
	I0719 14:46:48.471097   27680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:48.471387   27680 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:48.471565   27680 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:46:48.471798   27680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:48.471822   27680 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:46:48.474777   27680 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:48.475287   27680 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:46:48.475314   27680 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:48.475455   27680 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:46:48.475638   27680 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:46:48.475797   27680 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:46:48.475946   27680 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:46:48.558096   27680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:48.573683   27680 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (5.454817085s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:46:49.294476   27780 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:46:49.294588   27780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:49.294597   27780 out.go:304] Setting ErrFile to fd 2...
	I0719 14:46:49.294604   27780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:49.294779   27780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:46:49.294959   27780 out.go:298] Setting JSON to false
	I0719 14:46:49.294992   27780 mustload.go:65] Loading cluster: ha-999305
	I0719 14:46:49.295112   27780 notify.go:220] Checking for updates...
	I0719 14:46:49.295390   27780 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:46:49.295407   27780 status.go:255] checking status of ha-999305 ...
	I0719 14:46:49.295756   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:49.295818   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:49.314038   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42611
	I0719 14:46:49.314449   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:49.315032   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:49.315056   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:49.315418   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:49.315597   27780 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:46:49.316993   27780 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:46:49.317012   27780 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:49.317341   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:49.317382   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:49.332399   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42585
	I0719 14:46:49.332708   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:49.333128   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:49.333145   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:49.333471   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:49.333671   27780 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:46:49.336262   27780 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:49.336677   27780 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:49.336713   27780 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:49.336811   27780 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:49.337108   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:49.337149   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:49.351255   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I0719 14:46:49.351591   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:49.352014   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:49.352033   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:49.352328   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:49.352493   27780 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:46:49.352671   27780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:49.352700   27780 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:46:49.355230   27780 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:49.355569   27780 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:49.355605   27780 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:49.355734   27780 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:46:49.355927   27780 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:46:49.356064   27780 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:46:49.356217   27780 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:46:49.442671   27780 ssh_runner.go:195] Run: systemctl --version
	I0719 14:46:49.448817   27780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:49.462551   27780 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:46:49.462576   27780 api_server.go:166] Checking apiserver status ...
	I0719 14:46:49.462606   27780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:46:49.477121   27780 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:46:49.486254   27780 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:46:49.486314   27780 ssh_runner.go:195] Run: ls
	I0719 14:46:49.491287   27780 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:46:49.497184   27780 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:46:49.497208   27780 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:46:49.497219   27780 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:46:49.497236   27780 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:46:49.497557   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:49.497591   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:49.512030   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I0719 14:46:49.512533   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:49.513001   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:49.513016   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:49.513281   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:49.513457   27780 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:46:49.514982   27780 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:46:49.515001   27780 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:49.515370   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:49.515413   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:49.529274   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0719 14:46:49.529599   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:49.530027   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:49.530045   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:49.530341   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:49.530507   27780 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:46:49.533042   27780 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:49.533480   27780 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:49.533505   27780 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:49.533646   27780 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:49.534040   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:49.534097   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:49.549198   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0719 14:46:49.549580   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:49.550018   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:49.550035   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:49.550370   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:49.550572   27780 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:46:49.550762   27780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:49.550779   27780 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:46:49.553314   27780 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:49.553722   27780 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:49.553739   27780 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:49.553925   27780 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:46:49.554065   27780 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:46:49.554179   27780 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:46:49.554306   27780 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	W0719 14:46:51.282568   27780 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:51.282624   27780 retry.go:31] will retry after 130.833672ms: dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:46:54.354490   27780 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:46:54.354585   27780 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0719 14:46:54.354606   27780 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:54.354613   27780 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:46:54.354628   27780 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:54.354638   27780 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:46:54.354930   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:54.354963   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:54.369604   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43887
	I0719 14:46:54.370101   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:54.370628   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:54.370650   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:54.371016   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:54.371213   27780 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:46:54.372800   27780 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:46:54.372820   27780 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:46:54.373128   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:54.373172   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:54.387586   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41461
	I0719 14:46:54.387941   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:54.388342   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:54.388365   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:54.388674   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:54.388882   27780 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:46:54.391461   27780 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:54.391879   27780 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:46:54.391899   27780 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:54.392030   27780 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:46:54.392438   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:54.392489   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:54.406872   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0719 14:46:54.407330   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:54.407889   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:54.407915   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:54.408205   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:54.408385   27780 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:46:54.408580   27780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:54.408601   27780 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:46:54.411210   27780 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:54.411648   27780 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:46:54.411678   27780 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:46:54.411794   27780 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:46:54.411948   27780 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:46:54.412057   27780 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:46:54.412185   27780 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:46:54.495518   27780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:54.517431   27780 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:46:54.517456   27780 api_server.go:166] Checking apiserver status ...
	I0719 14:46:54.517499   27780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:46:54.532798   27780 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:46:54.542106   27780 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:46:54.542154   27780 ssh_runner.go:195] Run: ls
	I0719 14:46:54.546501   27780 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:46:54.553043   27780 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:46:54.553065   27780 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:46:54.553076   27780 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:46:54.553095   27780 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:46:54.553756   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:54.553802   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:54.569620   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33467
	I0719 14:46:54.570079   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:54.570573   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:54.570599   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:54.570915   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:54.571113   27780 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:46:54.572596   27780 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:46:54.572613   27780 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:46:54.572982   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:54.573022   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:54.588018   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I0719 14:46:54.588388   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:54.588802   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:54.588824   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:54.589144   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:54.589325   27780 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:46:54.592152   27780 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:54.592544   27780 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:46:54.592568   27780 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:54.592714   27780 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:46:54.593007   27780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:54.593036   27780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:54.607477   27780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I0719 14:46:54.607818   27780 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:54.608271   27780 main.go:141] libmachine: Using API Version  1
	I0719 14:46:54.608292   27780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:54.608656   27780 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:54.608865   27780 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:46:54.609006   27780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:54.609033   27780 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:46:54.611639   27780 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:54.612087   27780 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:46:54.612122   27780 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:46:54.612356   27780 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:46:54.612531   27780 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:46:54.612728   27780 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:46:54.612985   27780 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:46:54.693501   27780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:54.707795   27780 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (5.042087693s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:46:55.852847   27881 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:46:55.852967   27881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:55.852975   27881 out.go:304] Setting ErrFile to fd 2...
	I0719 14:46:55.852979   27881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:46:55.853145   27881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:46:55.853290   27881 out.go:298] Setting JSON to false
	I0719 14:46:55.853314   27881 mustload.go:65] Loading cluster: ha-999305
	I0719 14:46:55.853435   27881 notify.go:220] Checking for updates...
	I0719 14:46:55.853671   27881 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:46:55.853687   27881 status.go:255] checking status of ha-999305 ...
	I0719 14:46:55.854062   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:55.854108   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:55.868756   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I0719 14:46:55.869211   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:55.869732   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:46:55.869746   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:55.870139   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:55.870348   27881 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:46:55.872007   27881 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:46:55.872023   27881 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:55.872293   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:55.872331   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:55.886983   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36583
	I0719 14:46:55.887296   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:55.887788   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:46:55.887816   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:55.888138   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:55.888332   27881 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:46:55.891084   27881 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:55.891567   27881 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:55.891600   27881 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:55.891751   27881 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:46:55.892016   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:55.892047   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:55.905852   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0719 14:46:55.906290   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:55.906787   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:46:55.906822   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:55.907149   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:55.907325   27881 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:46:55.907506   27881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:55.907527   27881 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:46:55.910044   27881 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:55.910486   27881 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:46:55.910510   27881 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:46:55.910688   27881 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:46:55.910892   27881 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:46:55.911032   27881 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:46:55.911160   27881 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:46:55.993982   27881 ssh_runner.go:195] Run: systemctl --version
	I0719 14:46:56.001532   27881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:46:56.021060   27881 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:46:56.021089   27881 api_server.go:166] Checking apiserver status ...
	I0719 14:46:56.021138   27881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:46:56.043264   27881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:46:56.053267   27881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:46:56.053313   27881 ssh_runner.go:195] Run: ls
	I0719 14:46:56.057488   27881 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:46:56.063507   27881 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:46:56.063528   27881 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:46:56.063537   27881 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:46:56.063552   27881 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:46:56.063847   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:56.063894   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:56.078429   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0719 14:46:56.078780   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:56.079218   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:46:56.079238   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:56.079501   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:56.079713   27881 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:46:56.081226   27881 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:46:56.081251   27881 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:56.081647   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:56.081683   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:56.096536   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44979
	I0719 14:46:56.096908   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:56.097359   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:46:56.097381   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:56.097674   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:56.097844   27881 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:46:56.100536   27881 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:56.100923   27881 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:56.100962   27881 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:56.101081   27881 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:46:56.101461   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:46:56.101502   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:46:56.115416   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35829
	I0719 14:46:56.115738   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:46:56.116160   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:46:56.116179   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:46:56.116493   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:46:56.116672   27881 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:46:56.116821   27881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:46:56.116839   27881 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:46:56.119578   27881 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:56.120023   27881 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:46:56.120048   27881 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:46:56.120234   27881 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:46:56.120383   27881 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:46:56.120483   27881 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:46:56.120606   27881 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	W0719 14:46:57.426551   27881 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:46:57.426598   27881 retry.go:31] will retry after 169.761485ms: dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:47:00.502556   27881 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:47:00.502658   27881 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0719 14:47:00.502687   27881 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:00.502695   27881 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:47:00.502715   27881 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:00.502724   27881 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:47:00.503018   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:00.503064   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:00.517427   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36977
	I0719 14:47:00.517802   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:00.518257   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:47:00.518280   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:00.518592   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:00.518778   27881 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:47:00.520322   27881 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:47:00.520340   27881 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:00.520632   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:00.520663   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:00.535053   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I0719 14:47:00.535442   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:00.535816   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:47:00.535836   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:00.536105   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:00.536246   27881 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:47:00.539592   27881 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:00.540036   27881 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:00.540058   27881 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:00.540245   27881 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:00.540557   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:00.540598   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:00.556228   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34317
	I0719 14:47:00.556709   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:00.557283   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:47:00.557307   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:00.557684   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:00.557837   27881 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:47:00.558030   27881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:00.558048   27881 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:47:00.560488   27881 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:00.560954   27881 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:00.560978   27881 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:00.561056   27881 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:47:00.561249   27881 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:47:00.561414   27881 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:47:00.561616   27881 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:47:00.646412   27881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:00.663555   27881 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:00.663591   27881 api_server.go:166] Checking apiserver status ...
	I0719 14:47:00.663628   27881 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:00.678972   27881 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:47:00.689486   27881 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:00.689527   27881 ssh_runner.go:195] Run: ls
	I0719 14:47:00.694170   27881 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:00.698484   27881 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:00.698503   27881 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:47:00.698510   27881 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:00.698524   27881 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:47:00.698799   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:00.698834   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:00.714143   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0719 14:47:00.714617   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:00.715052   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:47:00.715069   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:00.715302   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:00.715455   27881 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:00.717062   27881 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:47:00.717076   27881 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:00.717350   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:00.717382   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:00.731445   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37761
	I0719 14:47:00.731783   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:00.732200   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:47:00.732215   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:00.732485   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:00.732652   27881 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:47:00.735329   27881 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:00.735770   27881 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:00.735804   27881 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:00.735949   27881 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:00.736234   27881 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:00.736268   27881 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:00.750171   27881 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 14:47:00.750545   27881 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:00.750964   27881 main.go:141] libmachine: Using API Version  1
	I0719 14:47:00.750984   27881 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:00.751255   27881 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:00.751542   27881 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:47:00.751710   27881 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:00.751726   27881 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:47:00.754568   27881 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:00.754979   27881 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:00.755005   27881 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:00.755160   27881 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:47:00.755315   27881 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:47:00.755455   27881 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:47:00.755566   27881 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:47:00.837902   27881 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:00.852551   27881 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (3.749062358s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:47:03.456559   27999 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:47:03.456833   27999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:03.456844   27999 out.go:304] Setting ErrFile to fd 2...
	I0719 14:47:03.456850   27999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:03.457055   27999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:47:03.457234   27999 out.go:298] Setting JSON to false
	I0719 14:47:03.457268   27999 mustload.go:65] Loading cluster: ha-999305
	I0719 14:47:03.457330   27999 notify.go:220] Checking for updates...
	I0719 14:47:03.457683   27999 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:47:03.457700   27999 status.go:255] checking status of ha-999305 ...
	I0719 14:47:03.458079   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:03.458147   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:03.477698   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45625
	I0719 14:47:03.478258   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:03.478879   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:03.478908   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:03.479363   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:03.479584   27999 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:47:03.481296   27999 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:47:03.481314   27999 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:03.481661   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:03.481710   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:03.498073   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I0719 14:47:03.498474   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:03.498939   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:03.498963   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:03.499290   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:03.499468   27999 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:47:03.502255   27999 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:03.502688   27999 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:03.502717   27999 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:03.502869   27999 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:03.503253   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:03.503301   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:03.519078   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33427
	I0719 14:47:03.519443   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:03.519869   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:03.519897   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:03.520207   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:03.520425   27999 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:47:03.520665   27999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:03.520700   27999 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:47:03.523474   27999 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:03.523858   27999 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:03.523882   27999 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:03.523987   27999 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:47:03.524147   27999 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:47:03.524290   27999 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:47:03.524419   27999 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:47:03.605883   27999 ssh_runner.go:195] Run: systemctl --version
	I0719 14:47:03.612420   27999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:03.628140   27999 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:03.628170   27999 api_server.go:166] Checking apiserver status ...
	I0719 14:47:03.628216   27999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:03.645312   27999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:47:03.655712   27999 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:03.655771   27999 ssh_runner.go:195] Run: ls
	I0719 14:47:03.660316   27999 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:03.664619   27999 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:03.664642   27999 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:47:03.664652   27999 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:03.664674   27999 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:47:03.664946   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:03.664978   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:03.680831   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32791
	I0719 14:47:03.681338   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:03.681858   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:03.681891   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:03.682199   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:03.682440   27999 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:47:03.684324   27999 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:47:03.684340   27999 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:47:03.684757   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:03.684801   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:03.699755   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I0719 14:47:03.700142   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:03.700614   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:03.700639   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:03.700954   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:03.701161   27999 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:47:03.703774   27999 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:03.704152   27999 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:47:03.704181   27999 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:03.704248   27999 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:47:03.704552   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:03.704586   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:03.719081   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0719 14:47:03.719425   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:03.719872   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:03.719891   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:03.720181   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:03.720384   27999 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:47:03.720569   27999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:03.720588   27999 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:47:03.723487   27999 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:03.724004   27999 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:47:03.724030   27999 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:03.724214   27999 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:47:03.724391   27999 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:47:03.724577   27999 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:47:03.724795   27999 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	W0719 14:47:06.802504   27999 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:47:06.802595   27999 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0719 14:47:06.802614   27999 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:06.802623   27999 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:47:06.802640   27999 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:06.802657   27999 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:47:06.802960   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:06.802995   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:06.818132   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0719 14:47:06.818552   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:06.819005   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:06.819026   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:06.819305   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:06.819558   27999 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:47:06.821173   27999 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:47:06.821199   27999 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:06.821499   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:06.821532   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:06.835954   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0719 14:47:06.836344   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:06.836813   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:06.836834   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:06.837243   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:06.837450   27999 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:47:06.840398   27999 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:06.840862   27999 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:06.840901   27999 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:06.841036   27999 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:06.841363   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:06.841405   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:06.856648   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I0719 14:47:06.857069   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:06.857528   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:06.857550   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:06.857855   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:06.858071   27999 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:47:06.858253   27999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:06.858277   27999 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:47:06.861381   27999 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:06.861826   27999 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:06.861864   27999 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:06.862042   27999 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:47:06.862217   27999 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:47:06.862408   27999 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:47:06.862570   27999 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:47:06.952972   27999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:06.969439   27999 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:06.969467   27999 api_server.go:166] Checking apiserver status ...
	I0719 14:47:06.969516   27999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:06.984210   27999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:47:06.994279   27999 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:06.994347   27999 ssh_runner.go:195] Run: ls
	I0719 14:47:06.999629   27999 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:07.005801   27999 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:07.005832   27999 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:47:07.005844   27999 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:07.005863   27999 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:47:07.006256   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:07.006303   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:07.021119   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I0719 14:47:07.021542   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:07.022046   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:07.022061   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:07.022407   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:07.022636   27999 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:07.024259   27999 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:47:07.024286   27999 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:07.024637   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:07.024688   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:07.039960   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0719 14:47:07.040488   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:07.040989   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:07.041027   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:07.041377   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:07.041570   27999 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:47:07.044177   27999 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:07.044549   27999 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:07.044585   27999 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:07.044687   27999 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:07.044976   27999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:07.045019   27999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:07.060233   27999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33139
	I0719 14:47:07.060598   27999 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:07.061029   27999 main.go:141] libmachine: Using API Version  1
	I0719 14:47:07.061050   27999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:07.061417   27999 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:07.061602   27999 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:47:07.061817   27999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:07.061836   27999 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:47:07.064337   27999 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:07.064747   27999 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:07.064774   27999 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:07.064926   27999 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:47:07.065105   27999 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:47:07.065260   27999 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:47:07.065391   27999 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:47:07.146321   27999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:07.161305   27999 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
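The stderr block above shows the two probes `minikube status` makes per node: an SSH session to read `/var` capacity with `df -h /var` (which fails for ha-999305-m02 with "no route to host", producing the `Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent` line), and an HTTPS GET against the HA endpoint `https://192.168.39.254:8443/healthz` for control-plane nodes. The following is an illustrative Go sketch, not minikube source: it only reproduces the reachability and healthz checks using the IPs taken from this log, and the anonymous `/healthz` call and `InsecureSkipVerify` are simplifying assumptions standing in for minikube's authenticated client.

```go
// Sketch of the two node probes seen in the log above (assumptions noted in the lead-in).
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	// SSH reachability, analogous to the dial that precedes "df -h /var".
	// For ha-999305-m02 this is the call that fails with "no route to host".
	if _, err := net.DialTimeout("tcp", "192.168.39.163:22", 3*time.Second); err != nil {
		fmt.Println("ha-999305-m02 unreachable:", err)
	}

	// Apiserver health check against the HA virtual IP, analogous to the
	// "Checking apiserver healthz" lines. The log shows HTTP 200; a cluster
	// with anonymous access disabled may require credentials instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.StatusCode)
}
```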
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (4.692196301s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:47:09.007826   28099 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:47:09.007942   28099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:09.007951   28099 out.go:304] Setting ErrFile to fd 2...
	I0719 14:47:09.007955   28099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:09.008159   28099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:47:09.008331   28099 out.go:298] Setting JSON to false
	I0719 14:47:09.008359   28099 mustload.go:65] Loading cluster: ha-999305
	I0719 14:47:09.008487   28099 notify.go:220] Checking for updates...
	I0719 14:47:09.008740   28099 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:47:09.008756   28099 status.go:255] checking status of ha-999305 ...
	I0719 14:47:09.009229   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:09.009300   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:09.027215   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0719 14:47:09.027653   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:09.028396   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:09.028431   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:09.028762   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:09.028964   28099 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:47:09.030803   28099 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:47:09.030836   28099 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:09.031174   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:09.031246   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:09.046174   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37231
	I0719 14:47:09.046701   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:09.047199   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:09.047223   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:09.047587   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:09.047779   28099 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:47:09.050683   28099 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:09.051081   28099 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:09.051107   28099 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:09.051266   28099 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:09.051553   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:09.051588   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:09.066446   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I0719 14:47:09.066884   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:09.067364   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:09.067392   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:09.067747   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:09.067973   28099 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:47:09.068177   28099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:09.068204   28099 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:47:09.071060   28099 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:09.071476   28099 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:09.071509   28099 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:09.071648   28099 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:47:09.071842   28099 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:47:09.072010   28099 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:47:09.072164   28099 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:47:09.154180   28099 ssh_runner.go:195] Run: systemctl --version
	I0719 14:47:09.161450   28099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:09.177060   28099 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:09.177094   28099 api_server.go:166] Checking apiserver status ...
	I0719 14:47:09.177137   28099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:09.195769   28099 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:47:09.207805   28099 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:09.207882   28099 ssh_runner.go:195] Run: ls
	I0719 14:47:09.213084   28099 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:09.219829   28099 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:09.219861   28099 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:47:09.219871   28099 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:09.219887   28099 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:47:09.220164   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:09.220200   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:09.236051   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0719 14:47:09.236483   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:09.236974   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:09.236999   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:09.237283   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:09.237436   28099 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:47:09.238891   28099 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:47:09.238910   28099 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:47:09.239206   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:09.239239   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:09.254581   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0719 14:47:09.255025   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:09.255506   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:09.255525   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:09.256067   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:09.256263   28099 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:47:09.259410   28099 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:09.259810   28099 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:47:09.259845   28099 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:09.259987   28099 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:47:09.260261   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:09.260299   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:09.276167   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0719 14:47:09.276606   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:09.277106   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:09.277127   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:09.277476   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:09.277665   28099 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:47:09.277828   28099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:09.277844   28099 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:47:09.280671   28099 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:09.281162   28099 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:47:09.281189   28099 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:09.281359   28099 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:47:09.281533   28099 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:47:09.281682   28099 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:47:09.281845   28099 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	W0719 14:47:09.874548   28099 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:09.874610   28099 retry.go:31] will retry after 363.440093ms: dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:47:13.302606   28099 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:47:13.302699   28099 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0719 14:47:13.302720   28099 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:13.302748   28099 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:47:13.302781   28099 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:13.302792   28099 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:47:13.303082   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:13.303122   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:13.319589   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0719 14:47:13.320063   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:13.320518   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:13.320542   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:13.320821   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:13.321041   28099 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:47:13.322872   28099 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:47:13.322891   28099 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:13.323253   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:13.323303   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:13.338816   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0719 14:47:13.339256   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:13.339727   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:13.339749   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:13.340082   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:13.340294   28099 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:47:13.343034   28099 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:13.343427   28099 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:13.343455   28099 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:13.343590   28099 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:13.343910   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:13.343954   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:13.360383   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34599
	I0719 14:47:13.360842   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:13.361336   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:13.361356   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:13.361652   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:13.361821   28099 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:47:13.362028   28099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:13.362048   28099 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:47:13.364871   28099 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:13.365353   28099 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:13.365390   28099 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:13.365595   28099 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:47:13.365761   28099 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:47:13.365929   28099 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:47:13.366059   28099 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:47:13.450305   28099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:13.465303   28099 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:13.465335   28099 api_server.go:166] Checking apiserver status ...
	I0719 14:47:13.465374   28099 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:13.480960   28099 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:47:13.491910   28099 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:13.491971   28099 ssh_runner.go:195] Run: ls
	I0719 14:47:13.496098   28099 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:13.502613   28099 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:13.502633   28099 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:47:13.502640   28099 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:13.502667   28099 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:47:13.502948   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:13.502979   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:13.517422   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0719 14:47:13.517833   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:13.518227   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:13.518271   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:13.518603   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:13.518772   28099 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:13.520279   28099 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:47:13.520298   28099 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:13.520564   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:13.520600   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:13.535339   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0719 14:47:13.535743   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:13.536204   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:13.536228   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:13.536539   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:13.536715   28099 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:47:13.539717   28099 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:13.540178   28099 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:13.540207   28099 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:13.540312   28099 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:13.540615   28099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:13.540648   28099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:13.555567   28099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40235
	I0719 14:47:13.555988   28099 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:13.556492   28099 main.go:141] libmachine: Using API Version  1
	I0719 14:47:13.556519   28099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:13.556814   28099 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:13.557028   28099 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:47:13.557189   28099 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:13.557205   28099 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:47:13.559868   28099 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:13.560277   28099 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:13.560313   28099 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:13.560487   28099 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:47:13.560712   28099 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:47:13.560883   28099 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:47:13.561021   28099 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:47:13.641900   28099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:13.657329   28099 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (3.732515562s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:47:18.561761   28216 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:47:18.562018   28216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:18.562028   28216 out.go:304] Setting ErrFile to fd 2...
	I0719 14:47:18.562032   28216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:18.562191   28216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:47:18.562378   28216 out.go:298] Setting JSON to false
	I0719 14:47:18.562405   28216 mustload.go:65] Loading cluster: ha-999305
	I0719 14:47:18.562438   28216 notify.go:220] Checking for updates...
	I0719 14:47:18.562834   28216 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:47:18.562849   28216 status.go:255] checking status of ha-999305 ...
	I0719 14:47:18.563353   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:18.563406   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:18.582137   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0719 14:47:18.582518   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:18.583129   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:18.583158   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:18.583489   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:18.583660   28216 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:47:18.585175   28216 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:47:18.585203   28216 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:18.585590   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:18.585636   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:18.601405   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35435
	I0719 14:47:18.601801   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:18.602360   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:18.602386   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:18.602738   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:18.602928   28216 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:47:18.605974   28216 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:18.606520   28216 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:18.606554   28216 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:18.606706   28216 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:18.606987   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:18.607016   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:18.621679   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0719 14:47:18.622218   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:18.622754   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:18.622776   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:18.623083   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:18.623265   28216 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:47:18.623435   28216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:18.623470   28216 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:47:18.626271   28216 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:18.626693   28216 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:18.626711   28216 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:18.626870   28216 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:47:18.627057   28216 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:47:18.627202   28216 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:47:18.627386   28216 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:47:18.717013   28216 ssh_runner.go:195] Run: systemctl --version
	I0719 14:47:18.727292   28216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:18.744964   28216 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:18.744995   28216 api_server.go:166] Checking apiserver status ...
	I0719 14:47:18.745036   28216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:18.759953   28216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:47:18.769600   28216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:18.769666   28216 ssh_runner.go:195] Run: ls
	I0719 14:47:18.774190   28216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:18.778202   28216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:18.778230   28216 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:47:18.778256   28216 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:18.778296   28216 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:47:18.778607   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:18.778652   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:18.793964   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35773
	I0719 14:47:18.794462   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:18.794963   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:18.794984   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:18.795252   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:18.795413   28216 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:47:18.796934   28216 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:47:18.796952   28216 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:47:18.797249   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:18.797281   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:18.812117   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32951
	I0719 14:47:18.812569   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:18.813022   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:18.813054   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:18.813361   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:18.813534   28216 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:47:18.816581   28216 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:18.817040   28216 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:47:18.817065   28216 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:18.817393   28216 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:47:18.817737   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:18.817782   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:18.832567   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I0719 14:47:18.832933   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:18.833402   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:18.833422   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:18.833843   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:18.834042   28216 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:47:18.834226   28216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:18.834264   28216 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:47:18.836927   28216 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:18.837338   28216 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:47:18.837372   28216 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:47:18.837631   28216 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:47:18.837837   28216 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:47:18.838003   28216 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:47:18.838152   28216 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	W0719 14:47:21.906557   28216 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.163:22: connect: no route to host
	W0719 14:47:21.906650   28216 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	E0719 14:47:21.906672   28216 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:21.906684   28216 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:47:21.906710   28216 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.163:22: connect: no route to host
	I0719 14:47:21.906720   28216 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:47:21.907050   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:21.907136   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:21.922837   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I0719 14:47:21.923320   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:21.923870   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:21.923891   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:21.924182   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:21.924357   28216 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:47:21.926165   28216 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:47:21.926182   28216 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:21.926526   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:21.926567   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:21.941082   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34505
	I0719 14:47:21.941505   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:21.942044   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:21.942064   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:21.942366   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:21.942546   28216 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:47:21.945641   28216 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:21.946136   28216 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:21.946161   28216 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:21.946357   28216 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:21.946686   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:21.946719   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:21.960784   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0719 14:47:21.961301   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:21.961761   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:21.961784   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:21.962075   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:21.962265   28216 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:47:21.962465   28216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:21.962485   28216 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:47:21.964982   28216 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:21.966651   28216 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:21.966674   28216 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:21.966828   28216 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:47:21.966988   28216 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:47:21.967166   28216 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:47:21.967447   28216 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:47:22.049697   28216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:22.064569   28216 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:22.064615   28216 api_server.go:166] Checking apiserver status ...
	I0719 14:47:22.064644   28216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:22.078297   28216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:47:22.087746   28216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:22.087791   28216 ssh_runner.go:195] Run: ls
	I0719 14:47:22.092228   28216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:22.096398   28216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:22.096416   28216 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:47:22.096423   28216 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:22.096436   28216 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:47:22.096709   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:22.096737   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:22.111727   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0719 14:47:22.112204   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:22.112806   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:22.112830   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:22.113122   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:22.113322   28216 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:22.115259   28216 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:47:22.115274   28216 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:22.115646   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:22.115691   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:22.131042   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0719 14:47:22.131464   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:22.131991   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:22.132037   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:22.132371   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:22.132559   28216 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:47:22.135368   28216 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:22.135835   28216 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:22.135878   28216 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:22.136002   28216 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:22.136310   28216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:22.136348   28216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:22.151456   28216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0719 14:47:22.151869   28216 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:22.152333   28216 main.go:141] libmachine: Using API Version  1
	I0719 14:47:22.152353   28216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:22.152693   28216 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:22.152873   28216 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:47:22.153058   28216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:22.153077   28216 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:47:22.155915   28216 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:22.156368   28216 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:22.156403   28216 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:22.156560   28216 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:47:22.156800   28216 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:47:22.156990   28216 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:47:22.157134   28216 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:47:22.237326   28216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:22.252603   28216 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
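In the stderr above, the status probe for ha-999305-m02 fails before any in-guest checks run: the SSH dial to 192.168.39.163:22 comes back "connect: no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent, while ha-999305-m03 and ha-999305-m04 still complete their checks. A minimal standalone sketch of that reachability probe follows (plain net.DialTimeout, not minikube's sshutil retry logic; the address is copied from the DHCP lease shown in the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address copied from the DHCP lease for ha-999305-m02 in the log above.
		addr := "192.168.39.163:22"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// On the failing run this prints e.g. "... connect: no route to host".
			fmt.Printf("dial failure: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable:", addr)
	}
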
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 7 (602.437753ms)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:47:27.455719   28336 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:47:27.456061   28336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:27.456074   28336 out.go:304] Setting ErrFile to fd 2...
	I0719 14:47:27.456082   28336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:27.456303   28336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:47:27.456491   28336 out.go:298] Setting JSON to false
	I0719 14:47:27.456529   28336 mustload.go:65] Loading cluster: ha-999305
	I0719 14:47:27.456623   28336 notify.go:220] Checking for updates...
	I0719 14:47:27.456988   28336 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:47:27.457008   28336 status.go:255] checking status of ha-999305 ...
	I0719 14:47:27.457549   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.457600   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.473589   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32989
	I0719 14:47:27.474060   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.474733   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.474756   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.475098   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.475280   28336 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:47:27.476935   28336 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:47:27.476951   28336 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:27.477239   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.477282   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.492666   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0719 14:47:27.493043   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.493439   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.493466   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.493732   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.493894   28336 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:47:27.496496   28336 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:27.496905   28336 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:27.496937   28336 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:27.497037   28336 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:27.497372   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.497412   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.511505   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35459
	I0719 14:47:27.511854   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.512281   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.512299   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.512623   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.512776   28336 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:47:27.512952   28336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:27.512985   28336 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:47:27.515441   28336 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:27.515788   28336 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:27.515833   28336 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:27.515943   28336 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:47:27.516099   28336 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:47:27.516195   28336 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:47:27.516304   28336 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:47:27.598396   28336 ssh_runner.go:195] Run: systemctl --version
	I0719 14:47:27.604366   28336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:27.619167   28336 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:27.619195   28336 api_server.go:166] Checking apiserver status ...
	I0719 14:47:27.619245   28336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:27.633370   28336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:47:27.642637   28336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:27.642694   28336 ssh_runner.go:195] Run: ls
	I0719 14:47:27.647185   28336 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:27.653906   28336 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:27.653932   28336 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:47:27.653942   28336 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:27.653962   28336 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:47:27.654316   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.654358   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.669680   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
	I0719 14:47:27.670053   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.670638   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.670667   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.671008   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.671189   28336 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:47:27.672582   28336 status.go:330] ha-999305-m02 host status = "Stopped" (err=<nil>)
	I0719 14:47:27.672614   28336 status.go:343] host is not running, skipping remaining checks
	I0719 14:47:27.672622   28336 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:27.672637   28336 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:47:27.672927   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.672958   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.687947   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
	I0719 14:47:27.688339   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.688789   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.688810   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.689118   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.689353   28336 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:47:27.690905   28336 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:47:27.690919   28336 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:27.691259   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.691300   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.705966   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I0719 14:47:27.706413   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.706830   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.706849   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.707200   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.707349   28336 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:47:27.710262   28336 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:27.710681   28336 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:27.710708   28336 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:27.710881   28336 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:27.711182   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.711232   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.725512   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0719 14:47:27.725943   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.726360   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.726378   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.726702   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.726894   28336 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:47:27.727073   28336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:27.727093   28336 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:47:27.729780   28336 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:27.730227   28336 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:27.730264   28336 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:27.730436   28336 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:47:27.730629   28336 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:47:27.730804   28336 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:47:27.730948   28336 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:47:27.810333   28336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:27.826948   28336 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:27.826975   28336 api_server.go:166] Checking apiserver status ...
	I0719 14:47:27.827008   28336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:27.840904   28336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:47:27.850128   28336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:27.850174   28336 ssh_runner.go:195] Run: ls
	I0719 14:47:27.854748   28336 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:27.858974   28336 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:27.858998   28336 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:47:27.859009   28336 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:27.859026   28336 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:47:27.859411   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.859457   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.874871   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45041
	I0719 14:47:27.875267   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.875755   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.875779   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.876089   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.876280   28336 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:27.878012   28336 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:47:27.878024   28336 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:27.878303   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.878341   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.893833   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0719 14:47:27.894217   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.894824   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.894847   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.895243   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.895487   28336 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:47:27.898344   28336 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:27.898840   28336 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:27.898859   28336 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:27.899106   28336 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:27.899507   28336 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:27.899570   28336 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:27.914355   28336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0719 14:47:27.914795   28336 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:27.915308   28336 main.go:141] libmachine: Using API Version  1
	I0719 14:47:27.915335   28336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:27.915726   28336 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:27.915951   28336 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:47:27.916136   28336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:27.916155   28336 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:47:27.919122   28336 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:27.919508   28336 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:27.919546   28336 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:27.919694   28336 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:47:27.919836   28336 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:47:27.919995   28336 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:47:27.920106   28336 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:47:28.001443   28336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:28.016331   28336 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
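The healthy control-plane path in the log above repeats the same sequence on every attempt: SSH into the node, run `df -h /var`, confirm kubelet with `systemctl is-active`, resolve the cluster server from kubeconfig (https://192.168.39.254:8443), then GET /healthz and expect a 200 with body "ok". The sketch below reproduces only that final healthz probe; it skips TLS verification for brevity, whereas minikube's own check goes through its configured client:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log; InsecureSkipVerify is for brevity only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}
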
E0719 14:47:29.032379   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 7 (633.931016ms)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-999305-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:47:34.902921   28441 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:47:34.903015   28441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:34.903019   28441 out.go:304] Setting ErrFile to fd 2...
	I0719 14:47:34.903023   28441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:34.903216   28441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:47:34.903408   28441 out.go:298] Setting JSON to false
	I0719 14:47:34.903437   28441 mustload.go:65] Loading cluster: ha-999305
	I0719 14:47:34.903540   28441 notify.go:220] Checking for updates...
	I0719 14:47:34.903864   28441 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:47:34.903883   28441 status.go:255] checking status of ha-999305 ...
	I0719 14:47:34.904278   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:34.904329   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:34.924358   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34079
	I0719 14:47:34.924771   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:34.925442   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:34.925470   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:34.925769   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:34.925990   28441 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:47:34.927677   28441 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:47:34.927694   28441 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:34.927995   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:34.928029   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:34.944824   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0719 14:47:34.945152   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:34.945586   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:34.945605   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:34.945888   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:34.946091   28441 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:47:34.948978   28441 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:34.949384   28441 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:34.949417   28441 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:34.949550   28441 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:47:34.949833   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:34.949866   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:34.964019   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0719 14:47:34.964360   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:34.964758   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:34.964792   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:34.965121   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:34.965296   28441 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:47:34.965475   28441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:34.965501   28441 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:47:34.968461   28441 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:34.968928   28441 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:47:34.968955   28441 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:47:34.969093   28441 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:47:34.969253   28441 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:47:34.969390   28441 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:47:34.969501   28441 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:47:35.050048   28441 ssh_runner.go:195] Run: systemctl --version
	I0719 14:47:35.057008   28441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:35.074523   28441 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:35.074555   28441 api_server.go:166] Checking apiserver status ...
	I0719 14:47:35.074596   28441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:35.090935   28441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup
	W0719 14:47:35.101102   28441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1135/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:35.101171   28441 ssh_runner.go:195] Run: ls
	I0719 14:47:35.106911   28441 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:35.113054   28441 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:35.113075   28441 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:47:35.113084   28441 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:35.113098   28441 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:47:35.113379   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:35.113424   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:35.129281   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0719 14:47:35.129730   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:35.130291   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:35.130316   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:35.130632   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:35.130829   28441 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:47:35.132530   28441 status.go:330] ha-999305-m02 host status = "Stopped" (err=<nil>)
	I0719 14:47:35.132547   28441 status.go:343] host is not running, skipping remaining checks
	I0719 14:47:35.132555   28441 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:35.132578   28441 status.go:255] checking status of ha-999305-m03 ...
	I0719 14:47:35.132909   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:35.132956   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:35.148389   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I0719 14:47:35.148799   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:35.149255   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:35.149274   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:35.149562   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:35.149766   28441 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:47:35.151311   28441 status.go:330] ha-999305-m03 host status = "Running" (err=<nil>)
	I0719 14:47:35.151327   28441 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:35.151604   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:35.151641   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:35.165503   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41705
	I0719 14:47:35.165980   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:35.166465   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:35.166489   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:35.166795   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:35.166989   28441 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:47:35.169362   28441 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:35.169767   28441 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:35.169799   28441 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:35.169882   28441 host.go:66] Checking if "ha-999305-m03" exists ...
	I0719 14:47:35.170223   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:35.170282   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:35.185060   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0719 14:47:35.185535   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:35.186128   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:35.186158   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:35.186468   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:35.186892   28441 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:47:35.187071   28441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:35.187090   28441 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:47:35.189888   28441 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:35.190292   28441 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:35.190320   28441 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:35.190590   28441 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:47:35.190797   28441 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:47:35.190987   28441 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:47:35.191119   28441 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:47:35.274402   28441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:35.293102   28441 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:47:35.293138   28441 api_server.go:166] Checking apiserver status ...
	I0719 14:47:35.293181   28441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:47:35.310524   28441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0719 14:47:35.320963   28441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:47:35.321009   28441 ssh_runner.go:195] Run: ls
	I0719 14:47:35.325351   28441 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:47:35.329779   28441 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:47:35.329799   28441 status.go:422] ha-999305-m03 apiserver status = Running (err=<nil>)
	I0719 14:47:35.329809   28441 status.go:257] ha-999305-m03 status: &{Name:ha-999305-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:47:35.329828   28441 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:47:35.330189   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:35.330229   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:35.344953   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
	I0719 14:47:35.345320   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:35.345837   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:35.345861   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:35.346354   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:35.346541   28441 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:35.348271   28441 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:47:35.348282   28441 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:35.348538   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:35.348566   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:35.362717   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I0719 14:47:35.363072   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:35.363624   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:35.363642   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:35.363957   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:35.364215   28441 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:47:35.367436   28441 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:35.367815   28441 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:35.367852   28441 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:35.367972   28441 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:47:35.368256   28441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:35.368289   28441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:35.383988   28441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I0719 14:47:35.384346   28441 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:35.384822   28441 main.go:141] libmachine: Using API Version  1
	I0719 14:47:35.384842   28441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:35.385126   28441 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:35.385295   28441 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:47:35.385430   28441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:47:35.385454   28441 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:47:35.388646   28441 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:35.389118   28441 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:35.389148   28441 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:35.389311   28441 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:47:35.389489   28441 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:47:35.389639   28441 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:47:35.389785   28441 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:47:35.478781   28441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:47:35.495313   28441 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr" : exit status 7
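The test treats any non-zero exit from `minikube status` as a failure, and every retry above exits 7 because ha-999305-m02 never leaves the Stopped state after the restart. A hypothetical standalone reproduction of that assertion (not the test's own helper) could look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-999305",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Exit status 7 in the runs above; 0 would mean every node reported healthy.
			fmt.Printf("status exited non-zero: %d\n", cmd.ProcessState.ExitCode())
		}
	}
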
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-999305 -n ha-999305
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-999305 logs -n 25: (1.380734585s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305:/home/docker/cp-test_ha-999305-m03_ha-999305.txt                      |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305 sudo cat                                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305.txt                                |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m02:/home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m04 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp testdata/cp-test.txt                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305:/home/docker/cp-test_ha-999305-m04_ha-999305.txt                      |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305 sudo cat                                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305.txt                                |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m02:/home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03:/home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m03 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-999305 node stop m02 -v=7                                                    | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-999305 node start m02 -v=7                                                   | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:46 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:38:27
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:38:27.765006   22606 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:38:27.765117   22606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:38:27.765126   22606 out.go:304] Setting ErrFile to fd 2...
	I0719 14:38:27.765130   22606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:38:27.765290   22606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:38:27.765798   22606 out.go:298] Setting JSON to false
	I0719 14:38:27.766611   22606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1254,"bootTime":1721398654,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:38:27.766664   22606 start.go:139] virtualization: kvm guest
	I0719 14:38:27.769503   22606 out.go:177] * [ha-999305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:38:27.771032   22606 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:38:27.771040   22606 notify.go:220] Checking for updates...
	I0719 14:38:27.772433   22606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:38:27.773676   22606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:38:27.774784   22606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:38:27.775922   22606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:38:27.777176   22606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:38:27.778492   22606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:38:27.811750   22606 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 14:38:27.813006   22606 start.go:297] selected driver: kvm2
	I0719 14:38:27.813016   22606 start.go:901] validating driver "kvm2" against <nil>
	I0719 14:38:27.813026   22606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:38:27.813652   22606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:38:27.813725   22606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:38:27.827592   22606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:38:27.827638   22606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 14:38:27.827824   22606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:38:27.827873   22606 cni.go:84] Creating CNI manager for ""
	I0719 14:38:27.827884   22606 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 14:38:27.827889   22606 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 14:38:27.827960   22606 start.go:340] cluster config:
	{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:38:27.828052   22606 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:38:27.830672   22606 out.go:177] * Starting "ha-999305" primary control-plane node in "ha-999305" cluster
	I0719 14:38:27.831782   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:38:27.831806   22606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 14:38:27.831812   22606 cache.go:56] Caching tarball of preloaded images
	I0719 14:38:27.831873   22606 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:38:27.831882   22606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:38:27.832170   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:38:27.832189   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json: {Name:mkc4d7b141210cfb52ece9bf78a8c556f395293d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:27.832311   22606 start.go:360] acquireMachinesLock for ha-999305: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:38:27.832339   22606 start.go:364] duration metric: took 14.571µs to acquireMachinesLock for "ha-999305"
	I0719 14:38:27.832354   22606 start.go:93] Provisioning new machine with config: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:38:27.832414   22606 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 14:38:27.834522   22606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 14:38:27.834635   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:38:27.834665   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:38:27.847897   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I0719 14:38:27.848323   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:38:27.848912   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:38:27.848935   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:38:27.849226   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:38:27.849416   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:27.849537   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:27.849644   22606 start.go:159] libmachine.API.Create for "ha-999305" (driver="kvm2")
	I0719 14:38:27.849662   22606 client.go:168] LocalClient.Create starting
	I0719 14:38:27.849686   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:38:27.849711   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:38:27.849730   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:38:27.849772   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:38:27.849789   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:38:27.849799   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:38:27.849816   22606 main.go:141] libmachine: Running pre-create checks...
	I0719 14:38:27.849823   22606 main.go:141] libmachine: (ha-999305) Calling .PreCreateCheck
	I0719 14:38:27.850098   22606 main.go:141] libmachine: (ha-999305) Calling .GetConfigRaw
	I0719 14:38:27.850513   22606 main.go:141] libmachine: Creating machine...
	I0719 14:38:27.850530   22606 main.go:141] libmachine: (ha-999305) Calling .Create
	I0719 14:38:27.850636   22606 main.go:141] libmachine: (ha-999305) Creating KVM machine...
	I0719 14:38:27.851824   22606 main.go:141] libmachine: (ha-999305) DBG | found existing default KVM network
	I0719 14:38:27.852427   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:27.852314   22629 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0719 14:38:27.852449   22606 main.go:141] libmachine: (ha-999305) DBG | created network xml: 
	I0719 14:38:27.852461   22606 main.go:141] libmachine: (ha-999305) DBG | <network>
	I0719 14:38:27.852467   22606 main.go:141] libmachine: (ha-999305) DBG |   <name>mk-ha-999305</name>
	I0719 14:38:27.852476   22606 main.go:141] libmachine: (ha-999305) DBG |   <dns enable='no'/>
	I0719 14:38:27.852487   22606 main.go:141] libmachine: (ha-999305) DBG |   
	I0719 14:38:27.852499   22606 main.go:141] libmachine: (ha-999305) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 14:38:27.852514   22606 main.go:141] libmachine: (ha-999305) DBG |     <dhcp>
	I0719 14:38:27.852520   22606 main.go:141] libmachine: (ha-999305) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 14:38:27.852526   22606 main.go:141] libmachine: (ha-999305) DBG |     </dhcp>
	I0719 14:38:27.852533   22606 main.go:141] libmachine: (ha-999305) DBG |   </ip>
	I0719 14:38:27.852539   22606 main.go:141] libmachine: (ha-999305) DBG |   
	I0719 14:38:27.852544   22606 main.go:141] libmachine: (ha-999305) DBG | </network>
	I0719 14:38:27.852551   22606 main.go:141] libmachine: (ha-999305) DBG | 
	I0719 14:38:27.858073   22606 main.go:141] libmachine: (ha-999305) DBG | trying to create private KVM network mk-ha-999305 192.168.39.0/24...
	I0719 14:38:27.918530   22606 main.go:141] libmachine: (ha-999305) DBG | private KVM network mk-ha-999305 192.168.39.0/24 created
	I0719 14:38:27.918562   22606 main.go:141] libmachine: (ha-999305) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305 ...
	I0719 14:38:27.918585   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:27.918519   22629 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:38:27.918602   22606 main.go:141] libmachine: (ha-999305) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:38:27.918738   22606 main.go:141] libmachine: (ha-999305) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:38:28.144018   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:28.143897   22629 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa...
	I0719 14:38:28.331688   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:28.331580   22629 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/ha-999305.rawdisk...
	I0719 14:38:28.331715   22606 main.go:141] libmachine: (ha-999305) DBG | Writing magic tar header
	I0719 14:38:28.331724   22606 main.go:141] libmachine: (ha-999305) DBG | Writing SSH key tar header
	I0719 14:38:28.331732   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:28.331705   22629 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305 ...
	I0719 14:38:28.331855   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305
	I0719 14:38:28.331885   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305 (perms=drwx------)
	I0719 14:38:28.331895   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:38:28.331909   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:38:28.331918   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:38:28.331931   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:38:28.331942   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:38:28.331951   22606 main.go:141] libmachine: (ha-999305) DBG | Checking permissions on dir: /home
	I0719 14:38:28.331963   22606 main.go:141] libmachine: (ha-999305) DBG | Skipping /home - not owner
	I0719 14:38:28.331973   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:38:28.331985   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:38:28.331994   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:38:28.332009   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:38:28.332020   22606 main.go:141] libmachine: (ha-999305) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:38:28.332031   22606 main.go:141] libmachine: (ha-999305) Creating domain...
	I0719 14:38:28.332951   22606 main.go:141] libmachine: (ha-999305) define libvirt domain using xml: 
	I0719 14:38:28.332977   22606 main.go:141] libmachine: (ha-999305) <domain type='kvm'>
	I0719 14:38:28.332987   22606 main.go:141] libmachine: (ha-999305)   <name>ha-999305</name>
	I0719 14:38:28.333003   22606 main.go:141] libmachine: (ha-999305)   <memory unit='MiB'>2200</memory>
	I0719 14:38:28.333017   22606 main.go:141] libmachine: (ha-999305)   <vcpu>2</vcpu>
	I0719 14:38:28.333029   22606 main.go:141] libmachine: (ha-999305)   <features>
	I0719 14:38:28.333041   22606 main.go:141] libmachine: (ha-999305)     <acpi/>
	I0719 14:38:28.333066   22606 main.go:141] libmachine: (ha-999305)     <apic/>
	I0719 14:38:28.333085   22606 main.go:141] libmachine: (ha-999305)     <pae/>
	I0719 14:38:28.333105   22606 main.go:141] libmachine: (ha-999305)     
	I0719 14:38:28.333113   22606 main.go:141] libmachine: (ha-999305)   </features>
	I0719 14:38:28.333118   22606 main.go:141] libmachine: (ha-999305)   <cpu mode='host-passthrough'>
	I0719 14:38:28.333125   22606 main.go:141] libmachine: (ha-999305)   
	I0719 14:38:28.333130   22606 main.go:141] libmachine: (ha-999305)   </cpu>
	I0719 14:38:28.333137   22606 main.go:141] libmachine: (ha-999305)   <os>
	I0719 14:38:28.333142   22606 main.go:141] libmachine: (ha-999305)     <type>hvm</type>
	I0719 14:38:28.333149   22606 main.go:141] libmachine: (ha-999305)     <boot dev='cdrom'/>
	I0719 14:38:28.333153   22606 main.go:141] libmachine: (ha-999305)     <boot dev='hd'/>
	I0719 14:38:28.333161   22606 main.go:141] libmachine: (ha-999305)     <bootmenu enable='no'/>
	I0719 14:38:28.333167   22606 main.go:141] libmachine: (ha-999305)   </os>
	I0719 14:38:28.333180   22606 main.go:141] libmachine: (ha-999305)   <devices>
	I0719 14:38:28.333191   22606 main.go:141] libmachine: (ha-999305)     <disk type='file' device='cdrom'>
	I0719 14:38:28.333220   22606 main.go:141] libmachine: (ha-999305)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/boot2docker.iso'/>
	I0719 14:38:28.333240   22606 main.go:141] libmachine: (ha-999305)       <target dev='hdc' bus='scsi'/>
	I0719 14:38:28.333261   22606 main.go:141] libmachine: (ha-999305)       <readonly/>
	I0719 14:38:28.333281   22606 main.go:141] libmachine: (ha-999305)     </disk>
	I0719 14:38:28.333296   22606 main.go:141] libmachine: (ha-999305)     <disk type='file' device='disk'>
	I0719 14:38:28.333309   22606 main.go:141] libmachine: (ha-999305)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:38:28.333323   22606 main.go:141] libmachine: (ha-999305)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/ha-999305.rawdisk'/>
	I0719 14:38:28.333336   22606 main.go:141] libmachine: (ha-999305)       <target dev='hda' bus='virtio'/>
	I0719 14:38:28.333348   22606 main.go:141] libmachine: (ha-999305)     </disk>
	I0719 14:38:28.333364   22606 main.go:141] libmachine: (ha-999305)     <interface type='network'>
	I0719 14:38:28.333379   22606 main.go:141] libmachine: (ha-999305)       <source network='mk-ha-999305'/>
	I0719 14:38:28.333391   22606 main.go:141] libmachine: (ha-999305)       <model type='virtio'/>
	I0719 14:38:28.333405   22606 main.go:141] libmachine: (ha-999305)     </interface>
	I0719 14:38:28.333417   22606 main.go:141] libmachine: (ha-999305)     <interface type='network'>
	I0719 14:38:28.333431   22606 main.go:141] libmachine: (ha-999305)       <source network='default'/>
	I0719 14:38:28.333448   22606 main.go:141] libmachine: (ha-999305)       <model type='virtio'/>
	I0719 14:38:28.333469   22606 main.go:141] libmachine: (ha-999305)     </interface>
	I0719 14:38:28.333480   22606 main.go:141] libmachine: (ha-999305)     <serial type='pty'>
	I0719 14:38:28.333494   22606 main.go:141] libmachine: (ha-999305)       <target port='0'/>
	I0719 14:38:28.333505   22606 main.go:141] libmachine: (ha-999305)     </serial>
	I0719 14:38:28.333517   22606 main.go:141] libmachine: (ha-999305)     <console type='pty'>
	I0719 14:38:28.333525   22606 main.go:141] libmachine: (ha-999305)       <target type='serial' port='0'/>
	I0719 14:38:28.333533   22606 main.go:141] libmachine: (ha-999305)     </console>
	I0719 14:38:28.333540   22606 main.go:141] libmachine: (ha-999305)     <rng model='virtio'>
	I0719 14:38:28.333546   22606 main.go:141] libmachine: (ha-999305)       <backend model='random'>/dev/random</backend>
	I0719 14:38:28.333552   22606 main.go:141] libmachine: (ha-999305)     </rng>
	I0719 14:38:28.333556   22606 main.go:141] libmachine: (ha-999305)     
	I0719 14:38:28.333562   22606 main.go:141] libmachine: (ha-999305)     
	I0719 14:38:28.333567   22606 main.go:141] libmachine: (ha-999305)   </devices>
	I0719 14:38:28.333574   22606 main.go:141] libmachine: (ha-999305) </domain>
	I0719 14:38:28.333580   22606 main.go:141] libmachine: (ha-999305) 
	I0719 14:38:28.337739   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:e7:36:0d in network default
	I0719 14:38:28.338175   22606 main.go:141] libmachine: (ha-999305) Ensuring networks are active...
	I0719 14:38:28.338194   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:28.338905   22606 main.go:141] libmachine: (ha-999305) Ensuring network default is active
	I0719 14:38:28.339205   22606 main.go:141] libmachine: (ha-999305) Ensuring network mk-ha-999305 is active
	I0719 14:38:28.339633   22606 main.go:141] libmachine: (ha-999305) Getting domain xml...
	I0719 14:38:28.340215   22606 main.go:141] libmachine: (ha-999305) Creating domain...
	I0719 14:38:29.493645   22606 main.go:141] libmachine: (ha-999305) Waiting to get IP...
	I0719 14:38:29.494268   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:29.494651   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:29.494674   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:29.494599   22629 retry.go:31] will retry after 295.963865ms: waiting for machine to come up
	I0719 14:38:29.792057   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:29.792405   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:29.792423   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:29.792360   22629 retry.go:31] will retry after 387.809257ms: waiting for machine to come up
	I0719 14:38:30.181895   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:30.182366   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:30.182410   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:30.182334   22629 retry.go:31] will retry after 306.839378ms: waiting for machine to come up
	I0719 14:38:30.490760   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:30.491198   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:30.491227   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:30.491149   22629 retry.go:31] will retry after 425.660464ms: waiting for machine to come up
	I0719 14:38:30.918594   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:30.918991   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:30.919012   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:30.918949   22629 retry.go:31] will retry after 501.872394ms: waiting for machine to come up
	I0719 14:38:31.422669   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:31.423199   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:31.423220   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:31.423161   22629 retry.go:31] will retry after 953.109864ms: waiting for machine to come up
	I0719 14:38:32.377483   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:32.377897   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:32.377944   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:32.377834   22629 retry.go:31] will retry after 717.613082ms: waiting for machine to come up
	I0719 14:38:33.097393   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:33.097744   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:33.097775   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:33.097692   22629 retry.go:31] will retry after 1.362631393s: waiting for machine to come up
	I0719 14:38:34.462110   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:34.462632   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:34.462652   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:34.462596   22629 retry.go:31] will retry after 1.619727371s: waiting for machine to come up
	I0719 14:38:36.084335   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:36.084838   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:36.084862   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:36.084766   22629 retry.go:31] will retry after 1.838449443s: waiting for machine to come up
	I0719 14:38:37.924319   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:37.924749   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:37.924764   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:37.924690   22629 retry.go:31] will retry after 2.845704536s: waiting for machine to come up
	I0719 14:38:40.773565   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:40.773913   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:40.773937   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:40.773887   22629 retry.go:31] will retry after 3.088536072s: waiting for machine to come up
	I0719 14:38:43.863936   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:43.864398   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find current IP address of domain ha-999305 in network mk-ha-999305
	I0719 14:38:43.864427   22606 main.go:141] libmachine: (ha-999305) DBG | I0719 14:38:43.864363   22629 retry.go:31] will retry after 3.174729971s: waiting for machine to come up
	I0719 14:38:47.042692   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.043188   22606 main.go:141] libmachine: (ha-999305) Found IP for machine: 192.168.39.240
	I0719 14:38:47.043210   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has current primary IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.043219   22606 main.go:141] libmachine: (ha-999305) Reserving static IP address...
	I0719 14:38:47.043580   22606 main.go:141] libmachine: (ha-999305) DBG | unable to find host DHCP lease matching {name: "ha-999305", mac: "52:54:00:c3:55:82", ip: "192.168.39.240"} in network mk-ha-999305
	I0719 14:38:47.115495   22606 main.go:141] libmachine: (ha-999305) DBG | Getting to WaitForSSH function...
	I0719 14:38:47.115527   22606 main.go:141] libmachine: (ha-999305) Reserved static IP address: 192.168.39.240
	I0719 14:38:47.115539   22606 main.go:141] libmachine: (ha-999305) Waiting for SSH to be available...
	I0719 14:38:47.118059   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.118362   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.118391   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.118503   22606 main.go:141] libmachine: (ha-999305) DBG | Using SSH client type: external
	I0719 14:38:47.118546   22606 main.go:141] libmachine: (ha-999305) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa (-rw-------)
	I0719 14:38:47.118579   22606 main.go:141] libmachine: (ha-999305) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:38:47.118592   22606 main.go:141] libmachine: (ha-999305) DBG | About to run SSH command:
	I0719 14:38:47.118644   22606 main.go:141] libmachine: (ha-999305) DBG | exit 0
	I0719 14:38:47.246392   22606 main.go:141] libmachine: (ha-999305) DBG | SSH cmd err, output: <nil>: 
	I0719 14:38:47.246642   22606 main.go:141] libmachine: (ha-999305) KVM machine creation complete!
	I0719 14:38:47.246965   22606 main.go:141] libmachine: (ha-999305) Calling .GetConfigRaw
	I0719 14:38:47.247662   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:47.247932   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:47.248069   22606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:38:47.248082   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:38:47.249402   22606 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:38:47.249415   22606 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:38:47.249420   22606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:38:47.249426   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.251491   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.251876   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.251905   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.252078   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.252243   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.252398   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.252537   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.252693   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.252934   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.252950   22606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:38:47.353484   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:38:47.353506   22606 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:38:47.353513   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.356224   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.356483   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.356522   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.356650   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.356875   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.357030   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.357168   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.357339   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.357500   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.357511   22606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:38:47.459016   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:38:47.459102   22606 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:38:47.459116   22606 main.go:141] libmachine: Provisioning with buildroot...
	I0719 14:38:47.459127   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:47.459414   22606 buildroot.go:166] provisioning hostname "ha-999305"
	I0719 14:38:47.459444   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:47.459631   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.462132   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.462441   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.462467   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.462620   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.462786   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.462939   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.463058   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.463188   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.463435   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.463458   22606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305 && echo "ha-999305" | sudo tee /etc/hostname
	I0719 14:38:47.580240   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305
	
	I0719 14:38:47.580268   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.582743   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.582986   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.583015   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.583171   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.583357   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.583515   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.583662   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.583784   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.583963   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.583978   22606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:38:47.695077   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:38:47.695106   22606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:38:47.695146   22606 buildroot.go:174] setting up certificates
	I0719 14:38:47.695160   22606 provision.go:84] configureAuth start
	I0719 14:38:47.695178   22606 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:38:47.695452   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:47.698001   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.698345   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.698368   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.698506   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.700248   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.700536   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.700560   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.700699   22606 provision.go:143] copyHostCerts
	I0719 14:38:47.700737   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:38:47.700774   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:38:47.700786   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:38:47.700866   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:38:47.700979   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:38:47.701007   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:38:47.701017   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:38:47.701057   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:38:47.701129   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:38:47.701153   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:38:47.701161   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:38:47.701199   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:38:47.701284   22606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305 san=[127.0.0.1 192.168.39.240 ha-999305 localhost minikube]
	I0719 14:38:47.802791   22606 provision.go:177] copyRemoteCerts
	I0719 14:38:47.802843   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:38:47.802876   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.805089   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.805452   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.805486   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.805646   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.805850   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.806018   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.806219   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:47.888214   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:38:47.888293   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:38:47.913325   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:38:47.913403   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0719 14:38:47.936706   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:38:47.936767   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 14:38:47.959625   22606 provision.go:87] duration metric: took 264.451004ms to configureAuth
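	The server-cert step above (org=jenkins.ha-999305, SANs 127.0.0.1, 192.168.39.240, ha-999305, localhost, minikube) amounts to issuing a CA-signed TLS serving certificate and copying it, with the CA, under /etc/docker on the guest. A minimal Go sketch of that idea using only the standard library; it is not minikube's implementation, and the throwaway CA below stands in for the ca.pem/ca-key.pem pair the log references:

```go
// Sketch: issue a server certificate signed by a CA, with the SAN list shown in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA generated on the fly; minikube would instead load ca.pem/ca-key.pem from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-999305"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.240")},
		DNSNames:     []string{"ha-999305", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```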
	I0719 14:38:47.959664   22606 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:38:47.959864   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:38:47.959932   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:47.962555   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.962980   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:47.963003   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:47.963203   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:47.963516   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.963686   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:47.963824   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:47.964050   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:47.964233   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:47.964253   22606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:38:48.223779   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:38:48.223811   22606 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:38:48.223821   22606 main.go:141] libmachine: (ha-999305) Calling .GetURL
	I0719 14:38:48.225043   22606 main.go:141] libmachine: (ha-999305) DBG | Using libvirt version 6000000
	I0719 14:38:48.227409   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.227726   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.227746   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.227905   22606 main.go:141] libmachine: Docker is up and running!
	I0719 14:38:48.227915   22606 main.go:141] libmachine: Reticulating splines...
	I0719 14:38:48.227921   22606 client.go:171] duration metric: took 20.378250961s to LocalClient.Create
	I0719 14:38:48.227941   22606 start.go:167] duration metric: took 20.378296192s to libmachine.API.Create "ha-999305"
	I0719 14:38:48.227952   22606 start.go:293] postStartSetup for "ha-999305" (driver="kvm2")
	I0719 14:38:48.227964   22606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:38:48.227981   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.228194   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:38:48.228222   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.230468   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.230765   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.230802   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.230952   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.231116   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.231279   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.231433   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:48.317347   22606 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:38:48.321759   22606 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:38:48.321782   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:38:48.321837   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:38:48.321930   22606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:38:48.321947   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:38:48.322071   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:38:48.331283   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:38:48.354411   22606 start.go:296] duration metric: took 126.447804ms for postStartSetup
	I0719 14:38:48.354456   22606 main.go:141] libmachine: (ha-999305) Calling .GetConfigRaw
	I0719 14:38:48.354981   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:48.357345   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.357624   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.357652   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.357853   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:38:48.358033   22606 start.go:128] duration metric: took 20.525608686s to createHost
	I0719 14:38:48.358061   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.360195   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.360459   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.360482   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.360587   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.360766   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.360930   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.361091   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.361370   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:38:48.361555   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:38:48.361566   22606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 14:38:48.462972   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721399928.436594017
	
	I0719 14:38:48.462998   22606 fix.go:216] guest clock: 1721399928.436594017
	I0719 14:38:48.463010   22606 fix.go:229] Guest: 2024-07-19 14:38:48.436594017 +0000 UTC Remote: 2024-07-19 14:38:48.358048748 +0000 UTC m=+20.625559847 (delta=78.545269ms)
	I0719 14:38:48.463035   22606 fix.go:200] guest clock delta is within tolerance: 78.545269ms
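	The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock against a tolerance. A small sketch of that comparison; the sample value is the literal one from the log, and the parser assumes the nine-digit fractional part that %N prints:

```go
// Sketch: parse `date +%s.%N` output from the guest and compute the clock delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1721399928.436594017" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Assumes the full nine fractional digits that %N emits.
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second
	guest, err := parseGuestClock("1721399928.436594017") // would come from the SSH command above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v (tolerance %v, within=%v)\n", delta, tolerance, delta <= tolerance)
}
```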
	I0719 14:38:48.463044   22606 start.go:83] releasing machines lock for "ha-999305", held for 20.630696786s
	I0719 14:38:48.463068   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.463333   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:48.465876   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.466192   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.466219   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.466308   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.466805   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.466986   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:38:48.467096   22606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:38:48.467143   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.467203   22606 ssh_runner.go:195] Run: cat /version.json
	I0719 14:38:48.467228   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:38:48.469675   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.469826   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.470059   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.470086   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.470216   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:48.470221   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.470249   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:48.470419   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.470420   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:38:48.470601   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:38:48.470652   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.470757   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:38:48.470818   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:48.470868   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:38:48.567803   22606 ssh_runner.go:195] Run: systemctl --version
	I0719 14:38:48.574111   22606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:38:48.740377   22606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:38:48.746159   22606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:38:48.746225   22606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:38:48.762844   22606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:38:48.762870   22606 start.go:495] detecting cgroup driver to use...
	I0719 14:38:48.762932   22606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:38:48.778652   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:38:48.791736   22606 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:38:48.791783   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:38:48.804235   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:38:48.817135   22606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:38:48.926826   22606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:38:49.082076   22606 docker.go:233] disabling docker service ...
	I0719 14:38:49.082147   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:38:49.096477   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:38:49.110382   22606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:38:49.224555   22606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:38:49.345654   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:38:49.359204   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:38:49.378664   22606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:38:49.378741   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.389179   22606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:38:49.389249   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.399339   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.409418   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.419395   22606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:38:49.430021   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.440058   22606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:38:49.457072   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
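	The pause_image, cgroup_manager, conmon_cgroup, and default_sysctls lines above are all in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf executed one by one over SSH. A sketch of that sequence as data plus a run callback; the configureCRIO helper and the callback are illustrative, only the shell commands themselves come from the log:

```go
// Sketch: collect the CRI-O config edits and hand them to a command runner.
package main

import "fmt"

func configureCRIO(run func(cmd string) error, pauseImage, cgroupDriver string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupDriver),
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo rm -rf /etc/cni/net.mk`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}

func main() {
	// Dry run: print what would be executed on the guest instead of running it over SSH.
	_ = configureCRIO(func(cmd string) error { fmt.Println(cmd); return nil },
		"registry.k8s.io/pause:3.9", "cgroupfs")
}
```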
	I0719 14:38:49.467171   22606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:38:49.476795   22606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:38:49.476855   22606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:38:49.489479   22606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:38:49.498837   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:38:49.634942   22606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:38:49.771916   22606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:38:49.772021   22606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:38:49.776803   22606 start.go:563] Will wait 60s for crictl version
	I0719 14:38:49.776866   22606 ssh_runner.go:195] Run: which crictl
	I0719 14:38:49.780613   22606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:38:49.819994   22606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:38:49.820071   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:38:49.847398   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:38:49.877142   22606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:38:49.878338   22606 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:38:49.880976   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:49.881292   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:38:49.881322   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:38:49.881561   22606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:38:49.886198   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:38:49.899497   22606 kubeadm.go:883] updating cluster {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 14:38:49.899616   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:38:49.899660   22606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:38:49.932339   22606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 14:38:49.932403   22606 ssh_runner.go:195] Run: which lz4
	I0719 14:38:49.936559   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 14:38:49.936644   22606 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 14:38:49.940961   22606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 14:38:49.940990   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 14:38:51.358477   22606 crio.go:462] duration metric: took 1.421860886s to copy over tarball
	I0719 14:38:51.358571   22606 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 14:38:53.498960   22606 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.140358762s)
	I0719 14:38:53.498993   22606 crio.go:469] duration metric: took 2.140487816s to extract the tarball
	I0719 14:38:53.499003   22606 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 14:38:53.537877   22606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:38:53.584148   22606 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:38:53.584172   22606 cache_images.go:84] Images are preloaded, skipping loading
	I0719 14:38:53.584180   22606 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.30.3 crio true true} ...
	I0719 14:38:53.584270   22606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:38:53.584333   22606 ssh_runner.go:195] Run: crio config
	I0719 14:38:53.633383   22606 cni.go:84] Creating CNI manager for ""
	I0719 14:38:53.633405   22606 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 14:38:53.633416   22606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 14:38:53.633445   22606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-999305 NodeName:ha-999305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 14:38:53.633631   22606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-999305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
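	The kubeadm config above is a multi-document YAML assembled from per-node values (advertise address, node name, Kubernetes version, pod and service CIDRs). A sketch of producing such a file with text/template, abridged to a few of the fields shown; the template and struct are illustrative, not minikube's source:

```go
// Sketch: render an abridged kubeadm config from per-node parameters.
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress:  "192.168.39.240",
		NodeName:          "ha-999305",
		KubernetesVersion: "v1.30.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
}
```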
	
	I0719 14:38:53.633664   22606 kube-vip.go:115] generating kube-vip config ...
	I0719 14:38:53.633715   22606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:38:53.652624   22606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 14:38:53.652727   22606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
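	The kube-vip manifest above is deployed as a static pod: it only has to be written into the kubelet's staticPodPath (/etc/kubernetes/manifests per the KubeletConfiguration earlier, and indeed the log scps it to /etc/kubernetes/manifests/kube-vip.yaml a few lines below), and the kubelet starts it without involving the API server. A sketch of that write, using a scratch directory instead of the real path; writeStaticPod is an illustrative helper:

```go
// Sketch: drop a static pod manifest into a kubelet staticPodPath-style directory.
package main

import (
	"os"
	"path/filepath"
)

func writeStaticPod(manifestDir, name string, manifest []byte) error {
	if err := os.MkdirAll(manifestDir, 0o755); err != nil {
		return err
	}
	// 0644 so the kubelet (running as root) can read it; it picks up new files without a restart.
	return os.WriteFile(filepath.Join(manifestDir, name), manifest, 0o644)
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n")
	// Demo directory; the real target would be /etc/kubernetes/manifests on the guest.
	if err := writeStaticPod("/tmp/manifests-demo", "kube-vip.yaml", manifest); err != nil {
		panic(err)
	}
}
```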
	I0719 14:38:53.652783   22606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:38:53.661917   22606 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 14:38:53.661966   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 14:38:53.671190   22606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 14:38:53.687918   22606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:38:53.704052   22606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 14:38:53.719908   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 14:38:53.736366   22606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:38:53.740336   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:38:53.751786   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:38:53.867207   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:38:53.883522   22606 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.240
	I0719 14:38:53.883542   22606 certs.go:194] generating shared ca certs ...
	I0719 14:38:53.883556   22606 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:53.883721   22606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:38:53.883785   22606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:38:53.883799   22606 certs.go:256] generating profile certs ...
	I0719 14:38:53.883856   22606 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:38:53.883874   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt with IP's: []
	I0719 14:38:53.979360   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt ...
	I0719 14:38:53.979383   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt: {Name:mkf392f6ff96dcc81bc3397b7b50c1b32ca916bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:53.979549   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key ...
	I0719 14:38:53.979565   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key: {Name:mk9acb9a9e075ab14413f6b865c2de54fa24f9bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:53.979662   22606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283
	I0719 14:38:53.979678   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.254]
	I0719 14:38:54.074807   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283 ...
	I0719 14:38:54.074835   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283: {Name:mkffb203a8ae205ca72ec4f55d228de23ee28a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:54.075023   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283 ...
	I0719 14:38:54.075043   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283: {Name:mkfeac060f4d29cac912c99484ff2e43f59647a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:54.075136   22606 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a8cbc283 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:38:54.075240   22606 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a8cbc283 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
	I0719 14:38:54.075312   22606 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
	I0719 14:38:54.075333   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt with IP's: []
	I0719 14:38:54.254701   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt ...
	I0719 14:38:54.254728   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt: {Name:mkf836da894897ca036860c077d099e64d3f6625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:54.254892   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key ...
	I0719 14:38:54.254906   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key: {Name:mk184db1c4e6cd1691efdc781b94dc81c19a79ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:38:54.255009   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:38:54.255034   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:38:54.255052   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:38:54.255067   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:38:54.255080   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:38:54.255095   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:38:54.255111   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:38:54.255129   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 14:38:54.255212   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:38:54.255263   22606 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:38:54.255273   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:38:54.255306   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:38:54.255336   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:38:54.255365   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:38:54.255418   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:38:54.255453   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.255471   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.255489   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.256077   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:38:54.282002   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:38:54.307677   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:38:54.331324   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:38:54.357398   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 14:38:54.384206   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 14:38:54.409121   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:38:54.434492   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:38:54.459718   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:38:54.482705   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:38:54.507249   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:38:54.531717   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 14:38:54.549411   22606 ssh_runner.go:195] Run: openssl version
	I0719 14:38:54.555580   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:38:54.569012   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.573835   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.573890   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:38:54.580256   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:38:54.592664   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:38:54.605015   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.609559   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.609611   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:38:54.615522   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:38:54.631644   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:38:54.657012   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.663750   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.663811   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:38:54.671470   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
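	The openssl/ln pairs above install each CA into the OpenSSL trust directory under its subject hash (for example minikubeCA.pem behind /etc/ssl/certs/b5213941.0), which is how OpenSSL-based clients look certificates up. A sketch of that pattern, assuming `openssl` is on PATH; installCA is an illustrative helper and the demo writes into a temp directory rather than /etc/ssl/certs:

```go
// Sketch: symlink a CA certificate under its OpenSSL subject hash in a trust directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	// Demo against a scratch directory instead of /etc/ssl/certs.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
		fmt.Println("installCA:", err)
	}
}
```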
	I0719 14:38:54.686203   22606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:38:54.693263   22606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:38:54.693307   22606 kubeadm.go:392] StartCluster: {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:38:54.693381   22606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 14:38:54.693419   22606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 14:38:54.733210   22606 cri.go:89] found id: ""
	I0719 14:38:54.733278   22606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 14:38:54.743778   22606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 14:38:54.754335   22606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 14:38:54.764986   22606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 14:38:54.765014   22606 kubeadm.go:157] found existing configuration files:
	
	I0719 14:38:54.765059   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 14:38:54.774137   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 14:38:54.774186   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 14:38:54.783474   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 14:38:54.792386   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 14:38:54.792447   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 14:38:54.802047   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 14:38:54.811883   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 14:38:54.811942   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 14:38:54.821861   22606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 14:38:54.831769   22606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 14:38:54.831831   22606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
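	The "config check failed, skipping stale config cleanup" block above keeps an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443 and otherwise removes it so `kubeadm init` can write a fresh one. A sketch of that pruning logic; pruneStaleKubeconfigs is an illustrative name, and missing files are simply skipped, matching the "No such file or directory" results in the log:

```go
// Sketch: remove kubeconfigs that do not reference the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // file absent: nothing to prune, first start
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(p); err == nil {
				fmt.Println("removed stale", p)
			}
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```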
	I0719 14:38:54.841240   22606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 14:38:54.956454   22606 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 14:38:54.956546   22606 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 14:38:55.082082   22606 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 14:38:55.082228   22606 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 14:38:55.082370   22606 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 14:38:55.291658   22606 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 14:38:55.510421   22606 out.go:204]   - Generating certificates and keys ...
	I0719 14:38:55.510535   22606 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 14:38:55.510638   22606 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 14:38:55.510750   22606 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 14:38:55.592169   22606 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 14:38:55.737981   22606 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 14:38:55.819674   22606 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 14:38:56.000594   22606 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 14:38:56.000999   22606 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-999305 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0719 14:38:56.093074   22606 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 14:38:56.093209   22606 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-999305 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0719 14:38:56.250361   22606 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 14:38:56.567810   22606 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 14:38:56.854088   22606 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 14:38:56.854393   22606 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 14:38:56.968705   22606 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 14:38:57.113690   22606 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 14:38:57.231733   22606 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 14:38:57.346496   22606 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 14:38:57.631011   22606 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 14:38:57.631462   22606 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 14:38:57.633930   22606 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 14:38:57.636047   22606 out.go:204]   - Booting up control plane ...
	I0719 14:38:57.636156   22606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 14:38:57.636251   22606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 14:38:57.636353   22606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 14:38:57.650374   22606 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 14:38:57.651176   22606 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 14:38:57.651218   22606 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 14:38:57.778939   22606 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 14:38:57.779040   22606 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 14:38:58.279844   22606 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.402308ms
	I0719 14:38:58.279929   22606 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 14:39:04.334957   22606 kubeadm.go:310] [api-check] The API server is healthy after 6.055657626s
	I0719 14:39:04.346343   22606 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 14:39:04.366875   22606 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 14:39:04.897879   22606 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 14:39:04.898112   22606 kubeadm.go:310] [mark-control-plane] Marking the node ha-999305 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 14:39:04.911099   22606 kubeadm.go:310] [bootstrap-token] Using token: y3wvba.2pi3h6tz5c5qfy1e
	I0719 14:39:04.912495   22606 out.go:204]   - Configuring RBAC rules ...
	I0719 14:39:04.912635   22606 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 14:39:04.923852   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 14:39:04.931874   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 14:39:04.935251   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 14:39:04.938428   22606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 14:39:04.942392   22606 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 14:39:04.957243   22606 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 14:39:05.220162   22606 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 14:39:05.738653   22606 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 14:39:05.738674   22606 kubeadm.go:310] 
	I0719 14:39:05.738726   22606 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 14:39:05.738733   22606 kubeadm.go:310] 
	I0719 14:39:05.738810   22606 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 14:39:05.738820   22606 kubeadm.go:310] 
	I0719 14:39:05.738856   22606 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 14:39:05.738932   22606 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 14:39:05.738977   22606 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 14:39:05.738982   22606 kubeadm.go:310] 
	I0719 14:39:05.739071   22606 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 14:39:05.739095   22606 kubeadm.go:310] 
	I0719 14:39:05.739170   22606 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 14:39:05.739182   22606 kubeadm.go:310] 
	I0719 14:39:05.739259   22606 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 14:39:05.739365   22606 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 14:39:05.739465   22606 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 14:39:05.739477   22606 kubeadm.go:310] 
	I0719 14:39:05.739591   22606 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 14:39:05.739696   22606 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 14:39:05.739717   22606 kubeadm.go:310] 
	I0719 14:39:05.739845   22606 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y3wvba.2pi3h6tz5c5qfy1e \
	I0719 14:39:05.739950   22606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 14:39:05.739969   22606 kubeadm.go:310] 	--control-plane 
	I0719 14:39:05.739974   22606 kubeadm.go:310] 
	I0719 14:39:05.740037   22606 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 14:39:05.740043   22606 kubeadm.go:310] 
	I0719 14:39:05.740117   22606 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y3wvba.2pi3h6tz5c5qfy1e \
	I0719 14:39:05.740195   22606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 14:39:05.740807   22606 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 14:39:05.740847   22606 cni.go:84] Creating CNI manager for ""
	I0719 14:39:05.740861   22606 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 14:39:05.742724   22606 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 14:39:05.743994   22606 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 14:39:05.749373   22606 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 14:39:05.749391   22606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 14:39:05.767344   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 14:39:06.143874   22606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 14:39:06.143951   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:06.143964   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-999305 minikube.k8s.io/updated_at=2024_07_19T14_39_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=ha-999305 minikube.k8s.io/primary=true
	I0719 14:39:06.276600   22606 ops.go:34] apiserver oom_adj: -16
	I0719 14:39:06.276768   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:06.777050   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:07.277195   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:07.777067   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:08.277603   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:08.777781   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:09.277109   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:09.777094   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:10.276907   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:10.777577   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:11.276979   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:11.776980   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:12.277746   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:12.777823   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:13.276898   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:13.777065   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:14.276791   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:14.777109   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:15.277778   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:15.777382   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:16.277691   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:16.777000   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:17.277794   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 14:39:17.371835   22606 kubeadm.go:1113] duration metric: took 11.227943395s to wait for elevateKubeSystemPrivileges
	I0719 14:39:17.371869   22606 kubeadm.go:394] duration metric: took 22.678563939s to StartCluster
	I0719 14:39:17.371889   22606 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:17.371962   22606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:39:17.372666   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:17.372913   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 14:39:17.372932   22606 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 14:39:17.372998   22606 addons.go:69] Setting storage-provisioner=true in profile "ha-999305"
	I0719 14:39:17.373032   22606 addons.go:69] Setting default-storageclass=true in profile "ha-999305"
	I0719 14:39:17.373101   22606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-999305"
	I0719 14:39:17.373142   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:17.372911   22606 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:39:17.373192   22606 start.go:241] waiting for startup goroutines ...
	I0719 14:39:17.373021   22606 addons.go:234] Setting addon storage-provisioner=true in "ha-999305"
	I0719 14:39:17.373232   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:39:17.373573   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.373600   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.373621   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.373629   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.388259   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46051
	I0719 14:39:17.388444   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0719 14:39:17.388729   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.388915   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.389279   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.389303   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.389429   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.389448   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.389622   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.389770   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.389795   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:17.390338   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.390377   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.392008   22606 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:39:17.392239   22606 kapi.go:59] client config for ha-999305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key", CAFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 14:39:17.392683   22606 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 14:39:17.392821   22606 addons.go:234] Setting addon default-storageclass=true in "ha-999305"
	I0719 14:39:17.392850   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:39:17.393106   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.393138   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.404416   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35427
	I0719 14:39:17.404861   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.405327   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.405352   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.405638   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.405811   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:17.407397   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0719 14:39:17.407444   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:39:17.407768   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.408111   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.408128   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.408433   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.408857   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:17.408892   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:17.409515   22606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 14:39:17.410920   22606 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:39:17.410939   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 14:39:17.410962   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:39:17.413815   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.414268   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:39:17.414295   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.414363   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:39:17.414522   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:39:17.414665   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:39:17.414859   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:39:17.424410   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0719 14:39:17.424750   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:17.425223   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:17.425243   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:17.425547   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:17.425758   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:17.427230   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:39:17.427415   22606 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 14:39:17.427439   22606 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 14:39:17.427457   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:39:17.429845   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.430164   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:39:17.430182   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:17.430375   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:39:17.430498   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:39:17.430657   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:39:17.430767   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:39:17.473439   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 14:39:17.556123   22606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 14:39:17.581965   22606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 14:39:17.880786   22606 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0719 14:39:18.184337   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184361   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.184533   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184553   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.184706   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.184747   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.184783   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.184808   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.184821   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184823   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.184829   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.184833   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.184843   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.184852   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.185020   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.185041   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.185131   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.185149   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.185161   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.185256   22606 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 14:39:18.185266   22606 round_trippers.go:469] Request Headers:
	I0719 14:39:18.185276   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:39:18.185282   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:39:18.199781   22606 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0719 14:39:18.200571   22606 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 14:39:18.200586   22606 round_trippers.go:469] Request Headers:
	I0719 14:39:18.200616   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:39:18.200625   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:39:18.200628   22606 round_trippers.go:473]     Content-Type: application/json
	I0719 14:39:18.217314   22606 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0719 14:39:18.217484   22606 main.go:141] libmachine: Making call to close driver server
	I0719 14:39:18.217501   22606 main.go:141] libmachine: (ha-999305) Calling .Close
	I0719 14:39:18.217806   22606 main.go:141] libmachine: Successfully made call to close driver server
	I0719 14:39:18.217825   22606 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 14:39:18.217830   22606 main.go:141] libmachine: (ha-999305) DBG | Closing plugin on server side
	I0719 14:39:18.219393   22606 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 14:39:18.220509   22606 addons.go:510] duration metric: took 847.580492ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 14:39:18.220541   22606 start.go:246] waiting for cluster config update ...
	I0719 14:39:18.220556   22606 start.go:255] writing updated cluster config ...
	I0719 14:39:18.222000   22606 out.go:177] 
	I0719 14:39:18.223231   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:18.223309   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:39:18.224712   22606 out.go:177] * Starting "ha-999305-m02" control-plane node in "ha-999305" cluster
	I0719 14:39:18.225863   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:39:18.225891   22606 cache.go:56] Caching tarball of preloaded images
	I0719 14:39:18.226007   22606 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:39:18.226023   22606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:39:18.226115   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:39:18.226373   22606 start.go:360] acquireMachinesLock for ha-999305-m02: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:39:18.226430   22606 start.go:364] duration metric: took 34.94µs to acquireMachinesLock for "ha-999305-m02"
	I0719 14:39:18.226452   22606 start.go:93] Provisioning new machine with config: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:39:18.226553   22606 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0719 14:39:18.228171   22606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 14:39:18.228260   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:18.228302   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:18.242856   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0719 14:39:18.243275   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:18.243721   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:18.243742   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:18.244015   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:18.244196   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:18.244345   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:18.244474   22606 start.go:159] libmachine.API.Create for "ha-999305" (driver="kvm2")
	I0719 14:39:18.244502   22606 client.go:168] LocalClient.Create starting
	I0719 14:39:18.244537   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:39:18.244578   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:39:18.244599   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:39:18.244670   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:39:18.244695   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:39:18.244709   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:39:18.244734   22606 main.go:141] libmachine: Running pre-create checks...
	I0719 14:39:18.244745   22606 main.go:141] libmachine: (ha-999305-m02) Calling .PreCreateCheck
	I0719 14:39:18.244894   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetConfigRaw
	I0719 14:39:18.245213   22606 main.go:141] libmachine: Creating machine...
	I0719 14:39:18.245226   22606 main.go:141] libmachine: (ha-999305-m02) Calling .Create
	I0719 14:39:18.245363   22606 main.go:141] libmachine: (ha-999305-m02) Creating KVM machine...
	I0719 14:39:18.246682   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found existing default KVM network
	I0719 14:39:18.246804   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found existing private KVM network mk-ha-999305
	I0719 14:39:18.246927   22606 main.go:141] libmachine: (ha-999305-m02) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02 ...
	I0719 14:39:18.246964   22606 main.go:141] libmachine: (ha-999305-m02) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:39:18.246981   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.246903   22975 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:39:18.247087   22606 main.go:141] libmachine: (ha-999305-m02) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:39:18.462228   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.462084   22975 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa...
	I0719 14:39:18.582334   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.582194   22975 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/ha-999305-m02.rawdisk...
	I0719 14:39:18.582373   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Writing magic tar header
	I0719 14:39:18.582387   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Writing SSH key tar header
	I0719 14:39:18.582403   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:18.582368   22975 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02 ...
	I0719 14:39:18.582536   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02
	I0719 14:39:18.582564   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:39:18.582577   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02 (perms=drwx------)
	I0719 14:39:18.582594   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:39:18.582620   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:39:18.582653   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:39:18.582672   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:39:18.582689   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:39:18.582704   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:39:18.582717   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:39:18.582731   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:39:18.582745   22606 main.go:141] libmachine: (ha-999305-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:39:18.582757   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Checking permissions on dir: /home
	I0719 14:39:18.582774   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Skipping /home - not owner
	I0719 14:39:18.582789   22606 main.go:141] libmachine: (ha-999305-m02) Creating domain...
	I0719 14:39:18.583621   22606 main.go:141] libmachine: (ha-999305-m02) define libvirt domain using xml: 
	I0719 14:39:18.583642   22606 main.go:141] libmachine: (ha-999305-m02) <domain type='kvm'>
	I0719 14:39:18.583657   22606 main.go:141] libmachine: (ha-999305-m02)   <name>ha-999305-m02</name>
	I0719 14:39:18.583665   22606 main.go:141] libmachine: (ha-999305-m02)   <memory unit='MiB'>2200</memory>
	I0719 14:39:18.583682   22606 main.go:141] libmachine: (ha-999305-m02)   <vcpu>2</vcpu>
	I0719 14:39:18.583688   22606 main.go:141] libmachine: (ha-999305-m02)   <features>
	I0719 14:39:18.583699   22606 main.go:141] libmachine: (ha-999305-m02)     <acpi/>
	I0719 14:39:18.583704   22606 main.go:141] libmachine: (ha-999305-m02)     <apic/>
	I0719 14:39:18.583711   22606 main.go:141] libmachine: (ha-999305-m02)     <pae/>
	I0719 14:39:18.583715   22606 main.go:141] libmachine: (ha-999305-m02)     
	I0719 14:39:18.583730   22606 main.go:141] libmachine: (ha-999305-m02)   </features>
	I0719 14:39:18.583738   22606 main.go:141] libmachine: (ha-999305-m02)   <cpu mode='host-passthrough'>
	I0719 14:39:18.583759   22606 main.go:141] libmachine: (ha-999305-m02)   
	I0719 14:39:18.583785   22606 main.go:141] libmachine: (ha-999305-m02)   </cpu>
	I0719 14:39:18.583795   22606 main.go:141] libmachine: (ha-999305-m02)   <os>
	I0719 14:39:18.583808   22606 main.go:141] libmachine: (ha-999305-m02)     <type>hvm</type>
	I0719 14:39:18.583822   22606 main.go:141] libmachine: (ha-999305-m02)     <boot dev='cdrom'/>
	I0719 14:39:18.583837   22606 main.go:141] libmachine: (ha-999305-m02)     <boot dev='hd'/>
	I0719 14:39:18.583850   22606 main.go:141] libmachine: (ha-999305-m02)     <bootmenu enable='no'/>
	I0719 14:39:18.583862   22606 main.go:141] libmachine: (ha-999305-m02)   </os>
	I0719 14:39:18.583873   22606 main.go:141] libmachine: (ha-999305-m02)   <devices>
	I0719 14:39:18.583886   22606 main.go:141] libmachine: (ha-999305-m02)     <disk type='file' device='cdrom'>
	I0719 14:39:18.583902   22606 main.go:141] libmachine: (ha-999305-m02)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/boot2docker.iso'/>
	I0719 14:39:18.583914   22606 main.go:141] libmachine: (ha-999305-m02)       <target dev='hdc' bus='scsi'/>
	I0719 14:39:18.583941   22606 main.go:141] libmachine: (ha-999305-m02)       <readonly/>
	I0719 14:39:18.583958   22606 main.go:141] libmachine: (ha-999305-m02)     </disk>
	I0719 14:39:18.583984   22606 main.go:141] libmachine: (ha-999305-m02)     <disk type='file' device='disk'>
	I0719 14:39:18.584003   22606 main.go:141] libmachine: (ha-999305-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:39:18.584027   22606 main.go:141] libmachine: (ha-999305-m02)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/ha-999305-m02.rawdisk'/>
	I0719 14:39:18.584036   22606 main.go:141] libmachine: (ha-999305-m02)       <target dev='hda' bus='virtio'/>
	I0719 14:39:18.584048   22606 main.go:141] libmachine: (ha-999305-m02)     </disk>
	I0719 14:39:18.584059   22606 main.go:141] libmachine: (ha-999305-m02)     <interface type='network'>
	I0719 14:39:18.584070   22606 main.go:141] libmachine: (ha-999305-m02)       <source network='mk-ha-999305'/>
	I0719 14:39:18.584083   22606 main.go:141] libmachine: (ha-999305-m02)       <model type='virtio'/>
	I0719 14:39:18.584093   22606 main.go:141] libmachine: (ha-999305-m02)     </interface>
	I0719 14:39:18.584104   22606 main.go:141] libmachine: (ha-999305-m02)     <interface type='network'>
	I0719 14:39:18.584117   22606 main.go:141] libmachine: (ha-999305-m02)       <source network='default'/>
	I0719 14:39:18.584128   22606 main.go:141] libmachine: (ha-999305-m02)       <model type='virtio'/>
	I0719 14:39:18.584140   22606 main.go:141] libmachine: (ha-999305-m02)     </interface>
	I0719 14:39:18.584150   22606 main.go:141] libmachine: (ha-999305-m02)     <serial type='pty'>
	I0719 14:39:18.584161   22606 main.go:141] libmachine: (ha-999305-m02)       <target port='0'/>
	I0719 14:39:18.584171   22606 main.go:141] libmachine: (ha-999305-m02)     </serial>
	I0719 14:39:18.584184   22606 main.go:141] libmachine: (ha-999305-m02)     <console type='pty'>
	I0719 14:39:18.584195   22606 main.go:141] libmachine: (ha-999305-m02)       <target type='serial' port='0'/>
	I0719 14:39:18.584205   22606 main.go:141] libmachine: (ha-999305-m02)     </console>
	I0719 14:39:18.584225   22606 main.go:141] libmachine: (ha-999305-m02)     <rng model='virtio'>
	I0719 14:39:18.584239   22606 main.go:141] libmachine: (ha-999305-m02)       <backend model='random'>/dev/random</backend>
	I0719 14:39:18.584250   22606 main.go:141] libmachine: (ha-999305-m02)     </rng>
	I0719 14:39:18.584261   22606 main.go:141] libmachine: (ha-999305-m02)     
	I0719 14:39:18.584268   22606 main.go:141] libmachine: (ha-999305-m02)     
	I0719 14:39:18.584278   22606 main.go:141] libmachine: (ha-999305-m02)   </devices>
	I0719 14:39:18.584292   22606 main.go:141] libmachine: (ha-999305-m02) </domain>
	I0719 14:39:18.584306   22606 main.go:141] libmachine: (ha-999305-m02) 
	I0719 14:39:18.590654   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:73:3f:f4 in network default
	I0719 14:39:18.591159   22606 main.go:141] libmachine: (ha-999305-m02) Ensuring networks are active...
	I0719 14:39:18.591192   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:18.591867   22606 main.go:141] libmachine: (ha-999305-m02) Ensuring network default is active
	I0719 14:39:18.592066   22606 main.go:141] libmachine: (ha-999305-m02) Ensuring network mk-ha-999305 is active
	I0719 14:39:18.592371   22606 main.go:141] libmachine: (ha-999305-m02) Getting domain xml...
	I0719 14:39:18.593040   22606 main.go:141] libmachine: (ha-999305-m02) Creating domain...
	I0719 14:39:19.829028   22606 main.go:141] libmachine: (ha-999305-m02) Waiting to get IP...
	I0719 14:39:19.829882   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:19.830355   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:19.830379   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:19.830335   22975 retry.go:31] will retry after 232.698136ms: waiting for machine to come up
	I0719 14:39:20.064451   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:20.064879   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:20.064906   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:20.064838   22975 retry.go:31] will retry after 300.649663ms: waiting for machine to come up
	I0719 14:39:20.367477   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:20.367880   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:20.367900   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:20.367837   22975 retry.go:31] will retry after 308.173928ms: waiting for machine to come up
	I0719 14:39:20.677371   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:20.677828   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:20.677883   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:20.677813   22975 retry.go:31] will retry after 527.141479ms: waiting for machine to come up
	I0719 14:39:21.206519   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:21.207014   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:21.207044   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:21.206975   22975 retry.go:31] will retry after 527.998334ms: waiting for machine to come up
	I0719 14:39:21.736776   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:21.737213   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:21.737243   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:21.737166   22975 retry.go:31] will retry after 825.77254ms: waiting for machine to come up
	I0719 14:39:22.564616   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:22.565026   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:22.565064   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:22.565001   22975 retry.go:31] will retry after 909.482551ms: waiting for machine to come up
	I0719 14:39:23.475812   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:23.476310   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:23.476335   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:23.476264   22975 retry.go:31] will retry after 1.114340427s: waiting for machine to come up
	I0719 14:39:24.592057   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:24.592483   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:24.592513   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:24.592436   22975 retry.go:31] will retry after 1.413057812s: waiting for machine to come up
	I0719 14:39:26.007232   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:26.007705   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:26.007731   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:26.007654   22975 retry.go:31] will retry after 1.543069671s: waiting for machine to come up
	I0719 14:39:27.554873   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:27.555323   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:27.555346   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:27.555276   22975 retry.go:31] will retry after 2.033378244s: waiting for machine to come up
	I0719 14:39:29.589995   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:29.590403   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:29.590424   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:29.590384   22975 retry.go:31] will retry after 2.879562841s: waiting for machine to come up
	I0719 14:39:32.472168   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:32.472585   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:32.472608   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:32.472542   22975 retry.go:31] will retry after 4.312500232s: waiting for machine to come up
	I0719 14:39:36.787365   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:36.787784   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find current IP address of domain ha-999305-m02 in network mk-ha-999305
	I0719 14:39:36.787811   22606 main.go:141] libmachine: (ha-999305-m02) DBG | I0719 14:39:36.787737   22975 retry.go:31] will retry after 3.923983309s: waiting for machine to come up
	I0719 14:39:40.715144   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.715607   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has current primary IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.715628   22606 main.go:141] libmachine: (ha-999305-m02) Found IP for machine: 192.168.39.163
	I0719 14:39:40.715642   22606 main.go:141] libmachine: (ha-999305-m02) Reserving static IP address...
	I0719 14:39:40.716060   22606 main.go:141] libmachine: (ha-999305-m02) DBG | unable to find host DHCP lease matching {name: "ha-999305-m02", mac: "52:54:00:8f:f6:ba", ip: "192.168.39.163"} in network mk-ha-999305
	I0719 14:39:40.788615   22606 main.go:141] libmachine: (ha-999305-m02) Reserved static IP address: 192.168.39.163
	I0719 14:39:40.788635   22606 main.go:141] libmachine: (ha-999305-m02) Waiting for SSH to be available...
	I0719 14:39:40.788681   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Getting to WaitForSSH function...
	I0719 14:39:40.791139   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.791475   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:40.791512   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.791680   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Using SSH client type: external
	I0719 14:39:40.791704   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa (-rw-------)
	I0719 14:39:40.791741   22606 main.go:141] libmachine: (ha-999305-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.163 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:39:40.791761   22606 main.go:141] libmachine: (ha-999305-m02) DBG | About to run SSH command:
	I0719 14:39:40.791777   22606 main.go:141] libmachine: (ha-999305-m02) DBG | exit 0
	I0719 14:39:40.918300   22606 main.go:141] libmachine: (ha-999305-m02) DBG | SSH cmd err, output: <nil>: 
	I0719 14:39:40.918591   22606 main.go:141] libmachine: (ha-999305-m02) KVM machine creation complete!
	I0719 14:39:40.918873   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetConfigRaw
	I0719 14:39:40.919396   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:40.919576   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:40.919704   22606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:39:40.919716   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:39:40.920802   22606 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:39:40.920815   22606 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:39:40.920820   22606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:39:40.920826   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:40.923013   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.923413   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:40.923439   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:40.923580   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:40.923743   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:40.923927   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:40.924068   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:40.924216   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:40.924432   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:40.924443   22606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:39:41.029539   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:39:41.029567   22606 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:39:41.029575   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.032271   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.032675   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.032707   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.032819   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.033018   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.033183   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.033323   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.033523   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.033741   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.033753   22606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:39:41.143037   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:39:41.143114   22606 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:39:41.143122   22606 main.go:141] libmachine: Provisioning with buildroot...
	I0719 14:39:41.143129   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:41.143364   22606 buildroot.go:166] provisioning hostname "ha-999305-m02"
	I0719 14:39:41.143395   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:41.143602   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.146274   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.146679   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.146706   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.146828   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.147002   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.147152   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.147380   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.147593   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.147762   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.147774   22606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305-m02 && echo "ha-999305-m02" | sudo tee /etc/hostname
	I0719 14:39:41.271088   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305-m02
	
	I0719 14:39:41.271113   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.273393   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.273735   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.273763   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.273881   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.274075   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.274262   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.274414   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.274582   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.274803   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.274825   22606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:39:41.392552   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
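
The hostname provisioning above is one shell script pushed over SSH: set the hostname, write /etc/hostname, and patch the 127.0.1.1 entry in /etc/hosts. A minimal Go sketch of how such a script could be assembled (buildHostnameCmd is a hypothetical helper shown for illustration, not minikube's provisioner code):

package main

import "fmt"

// buildHostnameCmd composes the hostname/hosts-file script for a node.
func buildHostnameCmd(hostname string) string {
	return fmt.Sprintf(
		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(buildHostnameCmd("ha-999305-m02"))
}
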
	I0719 14:39:41.392580   22606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:39:41.392614   22606 buildroot.go:174] setting up certificates
	I0719 14:39:41.392628   22606 provision.go:84] configureAuth start
	I0719 14:39:41.392644   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetMachineName
	I0719 14:39:41.392952   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:41.395461   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.395808   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.395828   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.396000   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.398076   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.398391   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.398419   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.398642   22606 provision.go:143] copyHostCerts
	I0719 14:39:41.398682   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:39:41.398714   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:39:41.398727   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:39:41.398801   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:39:41.398901   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:39:41.398926   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:39:41.398933   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:39:41.398975   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:39:41.399049   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:39:41.399073   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:39:41.399081   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:39:41.399114   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:39:41.399226   22606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305-m02 san=[127.0.0.1 192.168.39.163 ha-999305-m02 localhost minikube]
	I0719 14:39:41.663891   22606 provision.go:177] copyRemoteCerts
	I0719 14:39:41.663946   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:39:41.663969   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.667045   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.667368   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.667393   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.667560   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.667874   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.668026   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.668146   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:41.752370   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:39:41.752452   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:39:41.777595   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:39:41.777667   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 14:39:41.802078   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:39:41.802148   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 14:39:41.826484   22606 provision.go:87] duration metric: took 433.840369ms to configureAuth
	I0719 14:39:41.826518   22606 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:39:41.826762   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:41.826859   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:41.829745   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.830121   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:41.830145   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:41.830403   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:41.830600   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.830761   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:41.830889   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:41.831041   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:41.831244   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:41.831267   22606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:39:42.124350   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
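
For reference, the container-runtime option step above amounts to writing a one-line sysconfig drop-in (/etc/sysconfig/crio.minikube) and restarting crio. A small illustrative Go sketch of composing that file's content (crioSysconfig is a hypothetical helper):

package main

import "fmt"

// crioSysconfig renders the CRIO_MINIKUBE_OPTIONS line for the drop-in file.
func crioSysconfig(insecureRegistries []string) string {
	opts := ""
	for _, r := range insecureRegistries {
		opts += fmt.Sprintf("--insecure-registry %s ", r)
	}
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s'\n", opts)
}

func main() {
	// Written to /etc/sysconfig/crio.minikube, followed by: sudo systemctl restart crio
	fmt.Print(crioSysconfig([]string{"10.96.0.0/12"}))
}
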
	I0719 14:39:42.124380   22606 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:39:42.124390   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetURL
	I0719 14:39:42.125797   22606 main.go:141] libmachine: (ha-999305-m02) DBG | Using libvirt version 6000000
	I0719 14:39:42.128127   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.128492   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.128525   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.128714   22606 main.go:141] libmachine: Docker is up and running!
	I0719 14:39:42.128728   22606 main.go:141] libmachine: Reticulating splines...
	I0719 14:39:42.128735   22606 client.go:171] duration metric: took 23.884223467s to LocalClient.Create
	I0719 14:39:42.128765   22606 start.go:167] duration metric: took 23.884290639s to libmachine.API.Create "ha-999305"
	I0719 14:39:42.128777   22606 start.go:293] postStartSetup for "ha-999305-m02" (driver="kvm2")
	I0719 14:39:42.128793   22606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:39:42.128820   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.129042   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:39:42.129067   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:42.131400   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.131724   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.131748   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.131888   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.132046   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.132211   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.132317   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:42.216466   22606 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:39:42.220784   22606 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:39:42.220805   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:39:42.220876   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:39:42.220973   22606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:39:42.220986   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:39:42.221067   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:39:42.230716   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:39:42.255478   22606 start.go:296] duration metric: took 126.686327ms for postStartSetup
	I0719 14:39:42.255536   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetConfigRaw
	I0719 14:39:42.256145   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:42.258614   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.258911   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.258939   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.259138   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:39:42.259341   22606 start.go:128] duration metric: took 24.032774788s to createHost
	I0719 14:39:42.259366   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:42.261488   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.261759   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.261787   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.261944   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.262103   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.262254   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.262482   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.262665   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:39:42.262832   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.163 22 <nil> <nil>}
	I0719 14:39:42.262842   22606 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:39:42.371100   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721399982.329282609
	
	I0719 14:39:42.371123   22606 fix.go:216] guest clock: 1721399982.329282609
	I0719 14:39:42.371130   22606 fix.go:229] Guest: 2024-07-19 14:39:42.329282609 +0000 UTC Remote: 2024-07-19 14:39:42.25935438 +0000 UTC m=+74.526865486 (delta=69.928229ms)
	I0719 14:39:42.371144   22606 fix.go:200] guest clock delta is within tolerance: 69.928229ms
	I0719 14:39:42.371149   22606 start.go:83] releasing machines lock for "ha-999305-m02", held for 24.144708393s
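
The guest-clock check above compares the time read from the VM with the host clock and only proceeds because the drift is small. A minimal sketch of that comparison (the one-second tolerance is an assumption for illustration; the logged delta of ~69.9ms reproduces with the values below):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock difference and whether it
// falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1721399982, 329282609)                       // from the guest's date output
	host := time.Date(2024, 7, 19, 14, 39, 42, 259354380, time.UTC) // host-side timestamp
	d, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}
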
	I0719 14:39:42.371165   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.371446   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:42.373953   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.374337   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.374365   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.376592   22606 out.go:177] * Found network options:
	I0719 14:39:42.377929   22606 out.go:177]   - NO_PROXY=192.168.39.240
	W0719 14:39:42.379182   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:39:42.379207   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.379764   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.379951   22606 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:39:42.380040   22606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:39:42.380080   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	W0719 14:39:42.380168   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:39:42.380250   22606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:39:42.380271   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:39:42.382746   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383077   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.383105   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383124   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383246   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.383403   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.383546   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.383567   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:42.383594   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:42.383704   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:42.383801   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:39:42.383945   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:39:42.384056   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:39:42.384149   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:39:42.617964   22606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:39:42.624485   22606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:39:42.624540   22606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:39:42.641218   22606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:39:42.641245   22606 start.go:495] detecting cgroup driver to use...
	I0719 14:39:42.641305   22606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:39:42.657487   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:39:42.671672   22606 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:39:42.671723   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:39:42.685181   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:39:42.698537   22606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:39:42.807279   22606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:39:42.943615   22606 docker.go:233] disabling docker service ...
	I0719 14:39:42.943675   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:39:42.958350   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:39:42.971339   22606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:39:43.105839   22606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:39:43.223091   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:39:43.236680   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:39:43.254975   22606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:39:43.255040   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.266905   22606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:39:43.266971   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.279094   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.289791   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.302548   22606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:39:43.314907   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.325554   22606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.344159   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:39:43.354516   22606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:39:43.363895   22606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:39:43.363948   22606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:39:43.377079   22606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:39:43.386342   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:39:43.492892   22606 ssh_runner.go:195] Run: sudo systemctl restart crio
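
The cgroup-driver setup above ends with a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, sysctls) followed by a crio restart. The real step shells out to sed on the guest; a small Go sketch of the same rewrite, shown only to illustrate the idea:

package main

import (
	"fmt"
	"regexp"
)

// configureCrio rewrites the pause image and cgroup manager lines in a
// crio.conf-style snippet, mirroring the sed edits logged above.
func configureCrio(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(configureCrio(conf, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
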
	I0719 14:39:43.641929   22606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:39:43.642003   22606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:39:43.646608   22606 start.go:563] Will wait 60s for crictl version
	I0719 14:39:43.646664   22606 ssh_runner.go:195] Run: which crictl
	I0719 14:39:43.650279   22606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:39:43.688012   22606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:39:43.688095   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:39:43.716291   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:39:43.747334   22606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:39:43.748971   22606 out.go:177]   - env NO_PROXY=192.168.39.240
	I0719 14:39:43.750208   22606 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:39:43.752887   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:43.753298   22606 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:39:32 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:39:43.753325   22606 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:39:43.753544   22606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:39:43.758044   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:39:43.771952   22606 mustload.go:65] Loading cluster: ha-999305
	I0719 14:39:43.772130   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:39:43.772368   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:43.772394   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:43.786872   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0719 14:39:43.787335   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:43.787813   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:43.787831   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:43.788110   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:43.788295   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:39:43.789897   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:39:43.790172   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:39:43.790209   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:39:43.804706   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
	I0719 14:39:43.805093   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:39:43.805583   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:39:43.805608   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:39:43.805947   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:39:43.806137   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:39:43.806322   22606 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.163
	I0719 14:39:43.806332   22606 certs.go:194] generating shared ca certs ...
	I0719 14:39:43.806344   22606 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:43.806462   22606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:39:43.806495   22606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:39:43.806503   22606 certs.go:256] generating profile certs ...
	I0719 14:39:43.806564   22606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:39:43.806587   22606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b
	I0719 14:39:43.806605   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.163 192.168.39.254]
	I0719 14:39:43.984627   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b ...
	I0719 14:39:43.984656   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b: {Name:mk2a0b1ad7bc80f20dada6c6b7ae3f4c0d7ba80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:43.984811   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b ...
	I0719 14:39:43.984822   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b: {Name:mk23808245a07f43c7c3d40d12ace7cf9ae36ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:39:43.984890   22606 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.a31cef5b -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:39:43.985019   22606 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.a31cef5b -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
	I0719 14:39:43.985137   22606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
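
The apiserver certificate generated above is notable because its SAN list carries every address a client might dial: the in-cluster service IP, localhost, both control-plane node IPs, and the HA virtual IP 192.168.39.254. A simplified, self-signed Go sketch of issuing a cert with that SAN list (the real cert is signed by the cluster CA and carries additional names; subject fields here are placeholders):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log above, including the kube-vip HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.240"), net.ParseIP("192.168.39.163"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
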
	I0719 14:39:43.985151   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:39:43.985163   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:39:43.985176   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:39:43.985188   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:39:43.985200   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:39:43.985213   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:39:43.985225   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:39:43.985236   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 14:39:43.985281   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:39:43.985307   22606 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:39:43.985317   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:39:43.985343   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:39:43.985364   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:39:43.985384   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:39:43.985418   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:39:43.985444   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:43.985457   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:39:43.985470   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:39:43.985500   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:39:43.988477   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:43.988943   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:39:43.988975   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:39:43.989097   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:39:43.989285   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:39:43.989438   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:39:43.989564   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:39:44.062643   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 14:39:44.067970   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 14:39:44.079368   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 14:39:44.083652   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 14:39:44.098386   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 14:39:44.102994   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 14:39:44.114292   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 14:39:44.118673   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 14:39:44.129653   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 14:39:44.133668   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 14:39:44.144876   22606 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 14:39:44.150850   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 14:39:44.162619   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:39:44.187791   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:39:44.211834   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:39:44.235247   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:39:44.259337   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 14:39:44.283869   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 14:39:44.308070   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:39:44.331461   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:39:44.354997   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:39:44.379318   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:39:44.403152   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:39:44.427474   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 14:39:44.444233   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 14:39:44.460690   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 14:39:44.477088   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 14:39:44.493773   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 14:39:44.511155   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 14:39:44.528189   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 14:39:44.544925   22606 ssh_runner.go:195] Run: openssl version
	I0719 14:39:44.550673   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:39:44.562009   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:39:44.566717   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:39:44.566785   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:39:44.572524   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:39:44.582943   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:39:44.593097   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:44.597429   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:44.597473   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:39:44.602955   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:39:44.613681   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:39:44.624201   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:39:44.628353   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:39:44.628396   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:39:44.633697   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
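
The three openssl/ln sequences above install each CA bundle under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its subject-hash name (e.g. b5213941.0 for minikubeCA.pem) so OpenSSL-based clients can find it. A small sketch of deriving that link name (assumes the openssl binary is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the `openssl x509 -hash` value used to name the
// /etc/ssl/certs/<hash>.0 symlink for a CA certificate.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	h, err := subjectHash(cert)
	if err != nil {
		panic(err)
	}
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
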
	I0719 14:39:44.643716   22606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:39:44.647535   22606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:39:44.647624   22606 kubeadm.go:934] updating node {m02 192.168.39.163 8443 v1.30.3 crio true true} ...
	I0719 14:39:44.647711   22606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.163
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
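
The kubelet drop-in above differs between nodes only in the hostname override and node IP baked into ExecStart. A tiny illustrative Go sketch of composing that line (kubeletExecStart is a hypothetical helper, not the template minikube actually uses):

package main

import "fmt"

// kubeletExecStart renders the node-specific kubelet command line.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet"+
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
		" --config=/var/lib/kubelet/config.yaml"+
		" --hostname-override=%s"+
		" --kubeconfig=/etc/kubernetes/kubelet.conf"+
		" --node-ip=%s", version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.30.3", "ha-999305-m02", "192.168.39.163"))
}
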
	I0719 14:39:44.647744   22606 kube-vip.go:115] generating kube-vip config ...
	I0719 14:39:44.647780   22606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:39:44.664906   22606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 14:39:44.665032   22606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
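
The kube-vip static pod manifest above is essentially a template parameterized by the virtual IP (192.168.39.254), the API server port, the interface, and the load-balancing toggles that get auto-enabled for control-plane nodes. A minimal render of just the node-variable env entries, using a hypothetical template rather than minikube's actual one:

package main

import (
	"os"
	"text/template"
)

const envTmpl = `    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: lb_enable
      value: "true"
`

func main() {
	t := template.Must(template.New("env").Parse(envTmpl))
	t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}
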
	I0719 14:39:44.665094   22606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:39:44.674426   22606 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 14:39:44.674477   22606 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 14:39:44.683529   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 14:39:44.683558   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:39:44.683593   22606 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0719 14:39:44.683614   22606 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0719 14:39:44.683625   22606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:39:44.687965   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 14:39:44.687997   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 14:40:18.526029   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:40:18.526122   22606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:40:18.531072   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 14:40:18.531099   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 14:40:55.985114   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:40:56.001664   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:40:56.001784   22606 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:40:56.006456   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 14:40:56.006485   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
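
Each binary URL above carries a checksum query (?checksum=file:...sha256), so the kubectl, kubeadm, and kubelet downloads are verified against their published digests before being pushed to the node. A small sketch of that verification (file names are placeholders for illustration):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verify hashes the downloaded binary and compares it with the first field of
// the published .sha256 file.
func verify(binPath, sumPath string) (bool, error) {
	f, err := os.Open(binPath)
	if err != nil {
		return false, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	want, err := os.ReadFile(sumPath)
	if err != nil {
		return false, err
	}
	return hex.EncodeToString(h.Sum(nil)) == strings.Fields(string(want))[0], nil
}

func main() {
	ok, err := verify("kubelet", "kubelet.sha256")
	fmt.Println(ok, err)
}
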
	I0719 14:40:56.398539   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 14:40:56.408289   22606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 14:40:56.424633   22606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:40:56.440389   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 14:40:56.456320   22606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:40:56.460249   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:40:56.471499   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:40:56.585388   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:40:56.601533   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:40:56.601886   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:40:56.601928   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:40:56.616473   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33295
	I0719 14:40:56.616930   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:40:56.617408   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:40:56.617424   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:40:56.617720   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:40:56.617898   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:40:56.618071   22606 start.go:317] joinCluster: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:40:56.618164   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 14:40:56.618185   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:40:56.621208   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:40:56.621621   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:40:56.621653   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:40:56.621819   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:40:56.622005   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:40:56.622158   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:40:56.622332   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:40:56.782875   22606 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:40:56.782916   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t5x36i.g04hbzpy1n0k6w3r --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m02 --control-plane --apiserver-advertise-address=192.168.39.163 --apiserver-bind-port=8443"
	I0719 14:41:19.066465   22606 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t5x36i.g04hbzpy1n0k6w3r --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m02 --control-plane --apiserver-advertise-address=192.168.39.163 --apiserver-bind-port=8443": (22.283523172s)
	I0719 14:41:19.066505   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 14:41:19.639495   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-999305-m02 minikube.k8s.io/updated_at=2024_07_19T14_41_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=ha-999305 minikube.k8s.io/primary=false
	I0719 14:41:19.784944   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-999305-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 14:41:19.897959   22606 start.go:319] duration metric: took 23.279884364s to joinCluster
	I0719 14:41:19.898032   22606 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:41:19.898338   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:41:19.899533   22606 out.go:177] * Verifying Kubernetes components...
	I0719 14:41:19.900743   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:41:20.187486   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:41:20.233898   22606 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:41:20.234135   22606 kapi.go:59] client config for ha-999305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key", CAFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 14:41:20.234217   22606 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0719 14:41:20.234484   22606 node_ready.go:35] waiting up to 6m0s for node "ha-999305-m02" to be "Ready" ...
	I0719 14:41:20.234586   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:20.234597   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:20.234608   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:20.234612   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:20.245032   22606 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 14:41:20.735045   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:20.735065   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:20.735073   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:20.735077   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:20.739760   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:21.235443   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:21.235475   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:21.235482   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:21.235489   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:21.238391   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:21.735486   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:21.735506   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:21.735514   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:21.735519   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:21.738581   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:22.235601   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:22.235623   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:22.235631   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:22.235634   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:22.239096   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:22.239796   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:22.735141   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:22.735167   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:22.735177   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:22.735182   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:22.738655   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:23.235387   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:23.235409   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:23.235421   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:23.235425   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:23.239298   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:23.734970   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:23.734992   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:23.735002   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:23.735007   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:23.738576   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:24.235569   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:24.235594   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:24.235606   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:24.235611   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:24.239367   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:24.240102   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:24.735498   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:24.735517   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:24.735525   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:24.735529   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:24.739607   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:25.235456   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:25.235478   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:25.235486   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:25.235491   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:25.238496   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:25.734676   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:25.734702   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:25.734714   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:25.734720   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:25.737811   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:26.234786   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:26.234812   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:26.234824   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:26.234829   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:26.238309   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:26.734665   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:26.734690   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:26.734699   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:26.734707   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:26.744183   22606 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 14:41:26.745230   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:27.235583   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:27.235604   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:27.235611   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:27.235614   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:27.238654   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:27.734757   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:27.734777   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:27.734784   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:27.734788   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:27.738092   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:28.235017   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:28.235046   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:28.235057   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:28.235065   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:28.238698   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:28.735413   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:28.735437   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:28.735448   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:28.735455   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:28.739308   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:29.235460   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:29.235482   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:29.235489   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:29.235493   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:29.238787   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:29.239611   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:29.734920   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:29.734940   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:29.734947   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:29.734951   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:29.738339   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:30.235322   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:30.235344   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:30.235353   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:30.235357   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:30.239241   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:30.735462   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:30.735486   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:30.735496   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:30.735500   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:30.738712   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:31.234839   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:31.234857   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:31.234865   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:31.234868   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:31.237706   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:31.735364   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:31.735385   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:31.735395   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:31.735400   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:31.738337   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:31.738925   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:32.235153   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:32.235176   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:32.235186   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:32.235192   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:32.238054   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:32.735287   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:32.735312   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:32.735322   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:32.735327   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:32.739030   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:33.235500   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:33.235528   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:33.235540   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:33.235547   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:33.239031   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:33.735435   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:33.735458   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:33.735469   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:33.735475   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:33.738128   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:34.234952   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:34.234975   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:34.234983   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:34.234988   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:34.238593   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:34.239210   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:34.735472   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:34.735490   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:34.735499   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:34.735502   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:34.738766   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:35.235517   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:35.235543   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:35.235556   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:35.235561   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:35.238716   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:35.734746   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:35.734765   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:35.734773   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:35.734777   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:35.738036   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:36.235477   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:36.235502   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:36.235512   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:36.235517   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:36.238926   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:36.239395   22606 node_ready.go:53] node "ha-999305-m02" has status "Ready":"False"
	I0719 14:41:36.735200   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:36.735222   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:36.735232   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:36.735238   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:36.739765   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:37.234858   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:37.234881   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:37.234892   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:37.234898   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:37.238254   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:37.735460   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:37.735482   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:37.735490   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:37.735494   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:37.739155   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.234873   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:38.234900   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.234913   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.234917   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.238547   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.239036   22606 node_ready.go:49] node "ha-999305-m02" has status "Ready":"True"
	I0719 14:41:38.239054   22606 node_ready.go:38] duration metric: took 18.004552949s for node "ha-999305-m02" to be "Ready" ...
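	The polling above (repeated GETs of /api/v1/nodes/ha-999305-m02 until the Ready condition reports True) can be reproduced outside the test harness. A minimal sketch, assuming the kubeconfig context created for this profile is named ha-999305:

	kubectl --context ha-999305 get node ha-999305-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'    # prints True once the kubelet reports Ready
	kubectl --context ha-999305 wait --for=condition=Ready node/ha-999305-m02 --timeout=6m    # or block, mirroring the 6m0s wait used here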
	I0719 14:41:38.239062   22606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:41:38.239118   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:38.239126   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.239132   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.239138   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.244137   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:38.250129   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.250192   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9sxgr
	I0719 14:41:38.250200   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.250207   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.250210   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.253366   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.254466   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.254487   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.254494   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.254498   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.257356   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.258028   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.258049   22606 pod_ready.go:81] duration metric: took 7.899929ms for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.258060   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.258118   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gtwxd
	I0719 14:41:38.258129   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.258138   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.258147   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.261231   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.262263   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.262278   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.262287   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.262291   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.265036   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.265950   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.265968   22606 pod_ready.go:81] duration metric: took 7.899503ms for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.265977   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.266020   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305
	I0719 14:41:38.266027   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.266033   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.266038   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.268403   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.268981   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.268997   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.269004   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.269007   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.271168   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.271583   22606 pod_ready.go:92] pod "etcd-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.271596   22606 pod_ready.go:81] duration metric: took 5.613301ms for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.271604   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.271660   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m02
	I0719 14:41:38.271670   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.271677   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.271681   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.274267   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.274894   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:38.274909   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.274919   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.274926   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.277928   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:38.278559   22606 pod_ready.go:92] pod "etcd-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.278575   22606 pod_ready.go:81] duration metric: took 6.965386ms for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.278591   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.434872   22606 request.go:629] Waited for 156.22314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:41:38.434943   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:41:38.434950   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.434960   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.434967   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.438021   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.635381   22606 request.go:629] Waited for 196.400511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.635437   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:38.635444   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.635454   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.635462   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.638941   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:38.639629   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:38.639647   22606 pod_ready.go:81] duration metric: took 361.04492ms for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.639656   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:38.834874   22606 request.go:629] Waited for 195.138261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:41:38.834965   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:41:38.834977   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:38.834988   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:38.834997   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:38.838012   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:39.035005   22606 request.go:629] Waited for 196.286103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.035081   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.035092   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.035108   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.035116   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.039251   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:39.039819   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:39.039837   22606 pod_ready.go:81] duration metric: took 400.173919ms for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.039851   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.235914   22606 request.go:629] Waited for 195.992688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:41:39.236002   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:41:39.236010   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.236021   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.236029   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.239055   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:39.434982   22606 request.go:629] Waited for 195.302459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:39.435071   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:39.435081   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.435094   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.435103   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.438520   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:39.439039   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:39.439062   22606 pod_ready.go:81] duration metric: took 399.203191ms for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.439075   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.635186   22606 request.go:629] Waited for 196.027799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:41:39.635251   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:41:39.635258   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.635269   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.635273   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.638441   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:39.835647   22606 request.go:629] Waited for 196.392371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.835732   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:39.835741   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:39.835748   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:39.835752   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:39.838529   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:39.839077   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:39.839097   22606 pod_ready.go:81] duration metric: took 400.012031ms for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:39.839109   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.035136   22606 request.go:629] Waited for 195.963436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:41:40.035199   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:41:40.035205   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.035213   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.035217   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.038436   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:40.235700   22606 request.go:629] Waited for 196.338225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:40.235748   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:40.235753   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.235760   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.235766   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.240036   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:41:40.240449   22606 pod_ready.go:92] pod "kube-proxy-766sx" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:40.240466   22606 pod_ready.go:81] duration metric: took 401.349631ms for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.240474   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.435698   22606 request.go:629] Waited for 195.163815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:41:40.435796   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:41:40.435807   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.435818   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.435826   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.439801   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:40.634947   22606 request.go:629] Waited for 194.275452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:40.635036   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:40.635047   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.635058   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.635068   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.638020   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:40.638718   22606 pod_ready.go:92] pod "kube-proxy-s2wb7" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:40.638741   22606 pod_ready.go:81] duration metric: took 398.258211ms for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.638753   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:40.835797   22606 request.go:629] Waited for 196.967578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:41:40.835861   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:41:40.835868   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:40.835878   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:40.835898   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:40.839212   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:41.035378   22606 request.go:629] Waited for 195.341022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:41.035430   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:41:41.035437   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.035447   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.035458   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.038664   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:41.039195   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:41.039211   22606 pod_ready.go:81] duration metric: took 400.451796ms for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:41.039219   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:41.235499   22606 request.go:629] Waited for 196.192704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:41:41.235566   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:41:41.235576   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.235588   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.235595   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.238457   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:41.435372   22606 request.go:629] Waited for 196.342868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:41.435439   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:41:41.435446   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.435453   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.435458   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.439187   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:41.439914   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:41:41.439928   22606 pod_ready.go:81] duration metric: took 400.703094ms for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:41:41.439938   22606 pod_ready.go:38] duration metric: took 3.200865668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:41:41.439951   22606 api_server.go:52] waiting for apiserver process to appear ...
	I0719 14:41:41.439996   22606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:41:41.456138   22606 api_server.go:72] duration metric: took 21.558072267s to wait for apiserver process to appear ...
	I0719 14:41:41.456162   22606 api_server.go:88] waiting for apiserver healthz status ...
	I0719 14:41:41.456180   22606 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0719 14:41:41.460620   22606 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0719 14:41:41.460681   22606 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0719 14:41:41.460691   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.460702   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.460707   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.461594   22606 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 14:41:41.461708   22606 api_server.go:141] control plane version: v1.30.3
	I0719 14:41:41.461726   22606 api_server.go:131] duration metric: took 5.557821ms to wait for apiserver health ...
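	The healthz and version probes above can be repeated by hand against the first control-plane endpoint. A sketch, assuming the CA and client certificate paths shown in the kapi.go client config earlier in this log:

	CERTS=/home/jenkins/minikube-integration/19302-3847/.minikube
	curl --cacert "$CERTS/ca.crt" \
	     --cert   "$CERTS/profiles/ha-999305/client.crt" \
	     --key    "$CERTS/profiles/ha-999305/client.key" \
	     https://192.168.39.240:8443/healthz    # expected body: ok
	curl --cacert "$CERTS/ca.crt" --cert "$CERTS/profiles/ha-999305/client.crt" --key "$CERTS/profiles/ha-999305/client.key" \
	     https://192.168.39.240:8443/version    # reports the control plane version (v1.30.3 in this run)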
	I0719 14:41:41.461734   22606 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 14:41:41.635397   22606 request.go:629] Waited for 173.600025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:41.635453   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:41.635468   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.635475   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.635480   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.641142   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:41:41.646068   22606 system_pods.go:59] 17 kube-system pods found
	I0719 14:41:41.646092   22606 system_pods.go:61] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:41:41.646097   22606 system_pods.go:61] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:41:41.646101   22606 system_pods.go:61] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:41:41.646105   22606 system_pods.go:61] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:41:41.646108   22606 system_pods.go:61] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:41:41.646111   22606 system_pods.go:61] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:41:41.646115   22606 system_pods.go:61] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:41:41.646118   22606 system_pods.go:61] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:41:41.646121   22606 system_pods.go:61] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:41:41.646124   22606 system_pods.go:61] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:41:41.646127   22606 system_pods.go:61] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:41:41.646133   22606 system_pods.go:61] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:41:41.646138   22606 system_pods.go:61] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:41:41.646143   22606 system_pods.go:61] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:41:41.646145   22606 system_pods.go:61] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:41:41.646148   22606 system_pods.go:61] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:41:41.646151   22606 system_pods.go:61] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:41:41.646157   22606 system_pods.go:74] duration metric: took 184.414006ms to wait for pod list to return data ...
	I0719 14:41:41.646165   22606 default_sa.go:34] waiting for default service account to be created ...
	I0719 14:41:41.835646   22606 request.go:629] Waited for 189.422487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:41:41.835701   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:41:41.835707   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:41.835712   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:41.835716   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:41.838704   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:41:41.838951   22606 default_sa.go:45] found service account: "default"
	I0719 14:41:41.838973   22606 default_sa.go:55] duration metric: took 192.800827ms for default service account to be created ...
	I0719 14:41:41.838984   22606 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 14:41:42.035198   22606 request.go:629] Waited for 196.150627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:42.035275   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:41:42.035284   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:42.035292   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:42.035297   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:42.040481   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:41:42.045064   22606 system_pods.go:86] 17 kube-system pods found
	I0719 14:41:42.045086   22606 system_pods.go:89] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:41:42.045091   22606 system_pods.go:89] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:41:42.045095   22606 system_pods.go:89] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:41:42.045099   22606 system_pods.go:89] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:41:42.045103   22606 system_pods.go:89] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:41:42.045108   22606 system_pods.go:89] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:41:42.045114   22606 system_pods.go:89] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:41:42.045119   22606 system_pods.go:89] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:41:42.045129   22606 system_pods.go:89] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:41:42.045135   22606 system_pods.go:89] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:41:42.045144   22606 system_pods.go:89] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:41:42.045150   22606 system_pods.go:89] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:41:42.045156   22606 system_pods.go:89] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:41:42.045163   22606 system_pods.go:89] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:41:42.045167   22606 system_pods.go:89] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:41:42.045171   22606 system_pods.go:89] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:41:42.045176   22606 system_pods.go:89] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:41:42.045183   22606 system_pods.go:126] duration metric: took 206.193234ms to wait for k8s-apps to be running ...
	I0719 14:41:42.045192   22606 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 14:41:42.045247   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:41:42.061913   22606 system_svc.go:56] duration metric: took 16.713203ms WaitForService to wait for kubelet
	I0719 14:41:42.061937   22606 kubeadm.go:582] duration metric: took 22.163877158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:41:42.061956   22606 node_conditions.go:102] verifying NodePressure condition ...
	I0719 14:41:42.235374   22606 request.go:629] Waited for 173.359298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0719 14:41:42.235445   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0719 14:41:42.235452   22606 round_trippers.go:469] Request Headers:
	I0719 14:41:42.235462   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:41:42.235468   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:41:42.239088   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:41:42.240094   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:41:42.240129   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:41:42.240143   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:41:42.240148   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:41:42.240154   22606 node_conditions.go:105] duration metric: took 178.193733ms to run NodePressure ...
	I0719 14:41:42.240175   22606 start.go:241] waiting for startup goroutines ...
	I0719 14:41:42.240205   22606 start.go:255] writing updated cluster config ...
	I0719 14:41:42.242285   22606 out.go:177] 
	I0719 14:41:42.243680   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:41:42.243768   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:41:42.245236   22606 out.go:177] * Starting "ha-999305-m03" control-plane node in "ha-999305" cluster
	I0719 14:41:42.246373   22606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:41:42.246397   22606 cache.go:56] Caching tarball of preloaded images
	I0719 14:41:42.246500   22606 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:41:42.246510   22606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:41:42.246584   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:41:42.246726   22606 start.go:360] acquireMachinesLock for ha-999305-m03: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:41:42.246762   22606 start.go:364] duration metric: took 20.176µs to acquireMachinesLock for "ha-999305-m03"
	I0719 14:41:42.246777   22606 start.go:93] Provisioning new machine with config: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:41:42.246868   22606 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0719 14:41:42.248167   22606 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 14:41:42.248230   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:41:42.248257   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:41:42.265009   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37731
	I0719 14:41:42.265465   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:41:42.266011   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:41:42.266037   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:41:42.266401   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:41:42.266599   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:41:42.266789   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:41:42.266971   22606 start.go:159] libmachine.API.Create for "ha-999305" (driver="kvm2")
	I0719 14:41:42.267000   22606 client.go:168] LocalClient.Create starting
	I0719 14:41:42.267036   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 14:41:42.267079   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:41:42.267101   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:41:42.267192   22606 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 14:41:42.267222   22606 main.go:141] libmachine: Decoding PEM data...
	I0719 14:41:42.267241   22606 main.go:141] libmachine: Parsing certificate...
	I0719 14:41:42.267269   22606 main.go:141] libmachine: Running pre-create checks...
	I0719 14:41:42.267280   22606 main.go:141] libmachine: (ha-999305-m03) Calling .PreCreateCheck
	I0719 14:41:42.267533   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetConfigRaw
	I0719 14:41:42.267945   22606 main.go:141] libmachine: Creating machine...
	I0719 14:41:42.267958   22606 main.go:141] libmachine: (ha-999305-m03) Calling .Create
	I0719 14:41:42.268123   22606 main.go:141] libmachine: (ha-999305-m03) Creating KVM machine...
	I0719 14:41:42.269508   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found existing default KVM network
	I0719 14:41:42.269686   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found existing private KVM network mk-ha-999305
	I0719 14:41:42.269934   22606 main.go:141] libmachine: (ha-999305-m03) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03 ...
	I0719 14:41:42.269959   22606 main.go:141] libmachine: (ha-999305-m03) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:41:42.270050   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.269929   23632 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:41:42.270149   22606 main.go:141] libmachine: (ha-999305-m03) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 14:41:42.492863   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.492735   23632 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa...
	I0719 14:41:42.536529   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.536434   23632 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/ha-999305-m03.rawdisk...
	I0719 14:41:42.536555   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Writing magic tar header
	I0719 14:41:42.536567   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Writing SSH key tar header
	I0719 14:41:42.536677   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:42.536582   23632 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03 ...
	I0719 14:41:42.536705   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03 (perms=drwx------)
	I0719 14:41:42.536712   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03
	I0719 14:41:42.536743   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 14:41:42.536767   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 14:41:42.536779   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:41:42.536788   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 14:41:42.536798   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 14:41:42.536813   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 14:41:42.536823   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home/jenkins
	I0719 14:41:42.536835   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Checking permissions on dir: /home
	I0719 14:41:42.536847   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Skipping /home - not owner
	I0719 14:41:42.536860   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 14:41:42.536873   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 14:41:42.536884   22606 main.go:141] libmachine: (ha-999305-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 14:41:42.536892   22606 main.go:141] libmachine: (ha-999305-m03) Creating domain...
	I0719 14:41:42.537776   22606 main.go:141] libmachine: (ha-999305-m03) define libvirt domain using xml: 
	I0719 14:41:42.537798   22606 main.go:141] libmachine: (ha-999305-m03) <domain type='kvm'>
	I0719 14:41:42.537810   22606 main.go:141] libmachine: (ha-999305-m03)   <name>ha-999305-m03</name>
	I0719 14:41:42.537822   22606 main.go:141] libmachine: (ha-999305-m03)   <memory unit='MiB'>2200</memory>
	I0719 14:41:42.537845   22606 main.go:141] libmachine: (ha-999305-m03)   <vcpu>2</vcpu>
	I0719 14:41:42.537866   22606 main.go:141] libmachine: (ha-999305-m03)   <features>
	I0719 14:41:42.537876   22606 main.go:141] libmachine: (ha-999305-m03)     <acpi/>
	I0719 14:41:42.537886   22606 main.go:141] libmachine: (ha-999305-m03)     <apic/>
	I0719 14:41:42.537896   22606 main.go:141] libmachine: (ha-999305-m03)     <pae/>
	I0719 14:41:42.537901   22606 main.go:141] libmachine: (ha-999305-m03)     
	I0719 14:41:42.537909   22606 main.go:141] libmachine: (ha-999305-m03)   </features>
	I0719 14:41:42.537914   22606 main.go:141] libmachine: (ha-999305-m03)   <cpu mode='host-passthrough'>
	I0719 14:41:42.537920   22606 main.go:141] libmachine: (ha-999305-m03)   
	I0719 14:41:42.537925   22606 main.go:141] libmachine: (ha-999305-m03)   </cpu>
	I0719 14:41:42.537931   22606 main.go:141] libmachine: (ha-999305-m03)   <os>
	I0719 14:41:42.537945   22606 main.go:141] libmachine: (ha-999305-m03)     <type>hvm</type>
	I0719 14:41:42.537958   22606 main.go:141] libmachine: (ha-999305-m03)     <boot dev='cdrom'/>
	I0719 14:41:42.537968   22606 main.go:141] libmachine: (ha-999305-m03)     <boot dev='hd'/>
	I0719 14:41:42.537981   22606 main.go:141] libmachine: (ha-999305-m03)     <bootmenu enable='no'/>
	I0719 14:41:42.537990   22606 main.go:141] libmachine: (ha-999305-m03)   </os>
	I0719 14:41:42.537998   22606 main.go:141] libmachine: (ha-999305-m03)   <devices>
	I0719 14:41:42.538005   22606 main.go:141] libmachine: (ha-999305-m03)     <disk type='file' device='cdrom'>
	I0719 14:41:42.538014   22606 main.go:141] libmachine: (ha-999305-m03)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/boot2docker.iso'/>
	I0719 14:41:42.538025   22606 main.go:141] libmachine: (ha-999305-m03)       <target dev='hdc' bus='scsi'/>
	I0719 14:41:42.538035   22606 main.go:141] libmachine: (ha-999305-m03)       <readonly/>
	I0719 14:41:42.538045   22606 main.go:141] libmachine: (ha-999305-m03)     </disk>
	I0719 14:41:42.538058   22606 main.go:141] libmachine: (ha-999305-m03)     <disk type='file' device='disk'>
	I0719 14:41:42.538069   22606 main.go:141] libmachine: (ha-999305-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 14:41:42.538124   22606 main.go:141] libmachine: (ha-999305-m03)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/ha-999305-m03.rawdisk'/>
	I0719 14:41:42.538150   22606 main.go:141] libmachine: (ha-999305-m03)       <target dev='hda' bus='virtio'/>
	I0719 14:41:42.538164   22606 main.go:141] libmachine: (ha-999305-m03)     </disk>
	I0719 14:41:42.538176   22606 main.go:141] libmachine: (ha-999305-m03)     <interface type='network'>
	I0719 14:41:42.538190   22606 main.go:141] libmachine: (ha-999305-m03)       <source network='mk-ha-999305'/>
	I0719 14:41:42.538200   22606 main.go:141] libmachine: (ha-999305-m03)       <model type='virtio'/>
	I0719 14:41:42.538210   22606 main.go:141] libmachine: (ha-999305-m03)     </interface>
	I0719 14:41:42.538221   22606 main.go:141] libmachine: (ha-999305-m03)     <interface type='network'>
	I0719 14:41:42.538252   22606 main.go:141] libmachine: (ha-999305-m03)       <source network='default'/>
	I0719 14:41:42.538268   22606 main.go:141] libmachine: (ha-999305-m03)       <model type='virtio'/>
	I0719 14:41:42.538277   22606 main.go:141] libmachine: (ha-999305-m03)     </interface>
	I0719 14:41:42.538285   22606 main.go:141] libmachine: (ha-999305-m03)     <serial type='pty'>
	I0719 14:41:42.538296   22606 main.go:141] libmachine: (ha-999305-m03)       <target port='0'/>
	I0719 14:41:42.538305   22606 main.go:141] libmachine: (ha-999305-m03)     </serial>
	I0719 14:41:42.538311   22606 main.go:141] libmachine: (ha-999305-m03)     <console type='pty'>
	I0719 14:41:42.538318   22606 main.go:141] libmachine: (ha-999305-m03)       <target type='serial' port='0'/>
	I0719 14:41:42.538323   22606 main.go:141] libmachine: (ha-999305-m03)     </console>
	I0719 14:41:42.538328   22606 main.go:141] libmachine: (ha-999305-m03)     <rng model='virtio'>
	I0719 14:41:42.538334   22606 main.go:141] libmachine: (ha-999305-m03)       <backend model='random'>/dev/random</backend>
	I0719 14:41:42.538342   22606 main.go:141] libmachine: (ha-999305-m03)     </rng>
	I0719 14:41:42.538347   22606 main.go:141] libmachine: (ha-999305-m03)     
	I0719 14:41:42.538351   22606 main.go:141] libmachine: (ha-999305-m03)     
	I0719 14:41:42.538357   22606 main.go:141] libmachine: (ha-999305-m03)   </devices>
	I0719 14:41:42.538362   22606 main.go:141] libmachine: (ha-999305-m03) </domain>
	I0719 14:41:42.538369   22606 main.go:141] libmachine: (ha-999305-m03) 
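
	The <domain type='kvm'> XML logged above is the libvirt definition the kvm2 driver submits before booting ha-999305-m03. As a rough illustrative sketch only (not minikube's own code), a definition like this could be defined and started through the official libvirt Go bindings; the XML file name used here is hypothetical:

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Connect to the system libvirt daemon, the same URI the log shows (qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Load a domain definition similar to the <domain type='kvm'> XML above.
		// "ha-999305-m03.xml" is a hypothetical file name used only for this sketch.
		xml, err := os.ReadFile("ha-999305-m03.xml")
		if err != nil {
			panic(err)
		}

		// Define the domain (like `virsh define`), then boot it (like `virsh start`).
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			panic(err)
		}
		fmt.Println("domain defined and started")
	}
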
	I0719 14:41:42.544894   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:ed:64:e5 in network default
	I0719 14:41:42.545450   22606 main.go:141] libmachine: (ha-999305-m03) Ensuring networks are active...
	I0719 14:41:42.545465   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:42.546126   22606 main.go:141] libmachine: (ha-999305-m03) Ensuring network default is active
	I0719 14:41:42.546439   22606 main.go:141] libmachine: (ha-999305-m03) Ensuring network mk-ha-999305 is active
	I0719 14:41:42.546752   22606 main.go:141] libmachine: (ha-999305-m03) Getting domain xml...
	I0719 14:41:42.547397   22606 main.go:141] libmachine: (ha-999305-m03) Creating domain...
	I0719 14:41:43.766414   22606 main.go:141] libmachine: (ha-999305-m03) Waiting to get IP...
	I0719 14:41:43.767154   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:43.767504   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:43.767542   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:43.767479   23632 retry.go:31] will retry after 296.827647ms: waiting for machine to come up
	I0719 14:41:44.065979   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:44.066451   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:44.066481   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:44.066421   23632 retry.go:31] will retry after 340.383239ms: waiting for machine to come up
	I0719 14:41:44.407886   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:44.408379   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:44.408402   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:44.408310   23632 retry.go:31] will retry after 352.464502ms: waiting for machine to come up
	I0719 14:41:44.762806   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:44.763245   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:44.763266   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:44.763205   23632 retry.go:31] will retry after 583.331034ms: waiting for machine to come up
	I0719 14:41:45.348677   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:45.349016   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:45.349043   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:45.348905   23632 retry.go:31] will retry after 613.461603ms: waiting for machine to come up
	I0719 14:41:45.963853   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:45.964231   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:45.964262   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:45.964195   23632 retry.go:31] will retry after 690.125797ms: waiting for machine to come up
	I0719 14:41:46.656206   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:46.656663   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:46.656696   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:46.656597   23632 retry.go:31] will retry after 839.358911ms: waiting for machine to come up
	I0719 14:41:47.497863   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:47.498300   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:47.498329   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:47.498254   23632 retry.go:31] will retry after 1.407821443s: waiting for machine to come up
	I0719 14:41:48.907371   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:48.907819   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:48.907850   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:48.907772   23632 retry.go:31] will retry after 1.178162674s: waiting for machine to come up
	I0719 14:41:50.087798   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:50.088232   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:50.088258   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:50.088188   23632 retry.go:31] will retry after 1.754275136s: waiting for machine to come up
	I0719 14:41:51.844373   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:51.844818   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:51.844846   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:51.844772   23632 retry.go:31] will retry after 2.508819786s: waiting for machine to come up
	I0719 14:41:54.355224   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:54.355670   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:54.355719   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:54.355604   23632 retry.go:31] will retry after 2.253850604s: waiting for machine to come up
	I0719 14:41:56.611405   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:56.611834   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:56.611864   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:56.611788   23632 retry.go:31] will retry after 2.874253079s: waiting for machine to come up
	I0719 14:41:59.487290   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:41:59.487672   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find current IP address of domain ha-999305-m03 in network mk-ha-999305
	I0719 14:41:59.487696   22606 main.go:141] libmachine: (ha-999305-m03) DBG | I0719 14:41:59.487621   23632 retry.go:31] will retry after 5.378647907s: waiting for machine to come up
	I0719 14:42:04.870101   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.870403   22606 main.go:141] libmachine: (ha-999305-m03) Found IP for machine: 192.168.39.250
	I0719 14:42:04.870438   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has current primary IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.870448   22606 main.go:141] libmachine: (ha-999305-m03) Reserving static IP address...
	I0719 14:42:04.870941   22606 main.go:141] libmachine: (ha-999305-m03) DBG | unable to find host DHCP lease matching {name: "ha-999305-m03", mac: "52:54:00:c6:46:fe", ip: "192.168.39.250"} in network mk-ha-999305
	I0719 14:42:04.945778   22606 main.go:141] libmachine: (ha-999305-m03) Reserved static IP address: 192.168.39.250
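
	The repeated "will retry after ..." lines above show the driver polling for the VM's DHCP lease with growing delays until an IP appears. A minimal sketch of that polling pattern in Go (getIP is a hypothetical stand-in for the driver's lease lookup, not its actual retry helper):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls getIP with roughly geometric backoff, mirroring the
	// "waiting for machine to come up" retries seen in the log.
	func waitForIP(getIP func() (string, error), attempts int) (string, error) {
		delay := 300 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := getIP(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			delay += delay / 2 // back off between attempts
		}
		return "", fmt.Errorf("no IP after %d attempts", attempts)
	}

	func main() {
		// Toy usage: pretend the lease shows up on the third poll.
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.250", nil
		}, 10)
		fmt.Println(ip, err)
	}
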
	I0719 14:42:04.945812   22606 main.go:141] libmachine: (ha-999305-m03) Waiting for SSH to be available...
	I0719 14:42:04.945823   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Getting to WaitForSSH function...
	I0719 14:42:04.948263   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.948701   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:04.948731   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:04.948985   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Using SSH client type: external
	I0719 14:42:04.949012   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa (-rw-------)
	I0719 14:42:04.949039   22606 main.go:141] libmachine: (ha-999305-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 14:42:04.949052   22606 main.go:141] libmachine: (ha-999305-m03) DBG | About to run SSH command:
	I0719 14:42:04.949067   22606 main.go:141] libmachine: (ha-999305-m03) DBG | exit 0
	I0719 14:42:05.074442   22606 main.go:141] libmachine: (ha-999305-m03) DBG | SSH cmd err, output: <nil>: 
	I0719 14:42:05.074712   22606 main.go:141] libmachine: (ha-999305-m03) KVM machine creation complete!
	I0719 14:42:05.075089   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetConfigRaw
	I0719 14:42:05.075750   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:05.075978   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:05.076147   22606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 14:42:05.076165   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:42:05.077511   22606 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 14:42:05.077527   22606 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 14:42:05.077539   22606 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 14:42:05.077547   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.080484   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.080789   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.080804   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.081001   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.081210   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.081363   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.081515   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.081697   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.082082   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.082100   22606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 14:42:05.189746   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:42:05.189772   22606 main.go:141] libmachine: Detecting the provisioner...
	I0719 14:42:05.189782   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.192677   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.193131   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.193164   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.193320   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.193514   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.193723   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.193878   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.194084   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.194320   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.194334   22606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 14:42:05.303539   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 14:42:05.303686   22606 main.go:141] libmachine: found compatible host: buildroot
	I0719 14:42:05.303701   22606 main.go:141] libmachine: Provisioning with buildroot...
	I0719 14:42:05.303713   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:42:05.304029   22606 buildroot.go:166] provisioning hostname "ha-999305-m03"
	I0719 14:42:05.304061   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:42:05.304285   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.306863   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.307333   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.307356   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.307579   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.307778   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.307946   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.308116   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.308289   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.308441   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.308452   22606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305-m03 && echo "ha-999305-m03" | sudo tee /etc/hostname
	I0719 14:42:05.429934   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305-m03
	
	I0719 14:42:05.429966   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.432693   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.433046   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.433072   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.433200   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.433399   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.433605   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.433743   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.433892   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.434059   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.434075   22606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:42:05.552429   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:42:05.552466   22606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:42:05.552487   22606 buildroot.go:174] setting up certificates
	I0719 14:42:05.552513   22606 provision.go:84] configureAuth start
	I0719 14:42:05.552531   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetMachineName
	I0719 14:42:05.552853   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:05.555930   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.556337   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.556365   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.556681   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.559529   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.559930   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.559953   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.560153   22606 provision.go:143] copyHostCerts
	I0719 14:42:05.560190   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:42:05.560227   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:42:05.560235   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:42:05.560315   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:42:05.560407   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:42:05.560427   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:42:05.560433   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:42:05.560467   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:42:05.560525   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:42:05.560542   22606 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:42:05.560549   22606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:42:05.560584   22606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:42:05.560650   22606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305-m03 san=[127.0.0.1 192.168.39.250 ha-999305-m03 localhost minikube]
	I0719 14:42:05.673075   22606 provision.go:177] copyRemoteCerts
	I0719 14:42:05.673145   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:42:05.673170   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.676252   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.676673   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.676707   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.676885   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.677117   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.677285   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.677425   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:05.762105   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:42:05.762191   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:42:05.787706   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:42:05.787782   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 14:42:05.812938   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:42:05.813014   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 14:42:05.838283   22606 provision.go:87] duration metric: took 285.753561ms to configureAuth
	I0719 14:42:05.838310   22606 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:42:05.838652   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:42:05.838799   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:05.841681   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.842048   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:05.842078   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:05.842218   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:05.842439   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.842591   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:05.842829   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:05.842976   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:05.843178   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:05.843198   22606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:42:06.128093   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:42:06.128122   22606 main.go:141] libmachine: Checking connection to Docker...
	I0719 14:42:06.128130   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetURL
	I0719 14:42:06.129415   22606 main.go:141] libmachine: (ha-999305-m03) DBG | Using libvirt version 6000000
	I0719 14:42:06.131535   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.131940   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.131966   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.132200   22606 main.go:141] libmachine: Docker is up and running!
	I0719 14:42:06.132220   22606 main.go:141] libmachine: Reticulating splines...
	I0719 14:42:06.132229   22606 client.go:171] duration metric: took 23.865221578s to LocalClient.Create
	I0719 14:42:06.132261   22606 start.go:167] duration metric: took 23.865291689s to libmachine.API.Create "ha-999305"
	I0719 14:42:06.132271   22606 start.go:293] postStartSetup for "ha-999305-m03" (driver="kvm2")
	I0719 14:42:06.132286   22606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:42:06.132307   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.132514   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:42:06.132538   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:06.134621   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.134905   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.134927   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.135015   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.135187   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.135375   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.135551   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:06.221748   22606 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:42:06.226464   22606 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:42:06.226496   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:42:06.226580   22606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:42:06.226667   22606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:42:06.226677   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:42:06.226755   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:42:06.237126   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:42:06.263232   22606 start.go:296] duration metric: took 130.946805ms for postStartSetup
	I0719 14:42:06.263277   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetConfigRaw
	I0719 14:42:06.263869   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:06.266688   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.267104   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.267132   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.267479   22606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:42:06.267735   22606 start.go:128] duration metric: took 24.020856532s to createHost
	I0719 14:42:06.267769   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:06.270465   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.270837   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.270874   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.271037   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.271227   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.271375   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.271533   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.271706   22606 main.go:141] libmachine: Using SSH client type: native
	I0719 14:42:06.271912   22606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0719 14:42:06.271926   22606 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:42:06.383378   22606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721400126.348944461
	
	I0719 14:42:06.383403   22606 fix.go:216] guest clock: 1721400126.348944461
	I0719 14:42:06.383413   22606 fix.go:229] Guest: 2024-07-19 14:42:06.348944461 +0000 UTC Remote: 2024-07-19 14:42:06.267751669 +0000 UTC m=+218.535262765 (delta=81.192792ms)
	I0719 14:42:06.383440   22606 fix.go:200] guest clock delta is within tolerance: 81.192792ms
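The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and only resync the clock when the difference exceeds a tolerance. A minimal Go sketch of that check, assuming a 2-second tolerance for illustration (the log does not state the real threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaWithinTolerance parses the guest's `date +%s.%N` output, computes
// the delta against the local reference time, and reports whether it is within
// the given tolerance. Illustrative only; not minikube's fix.go.
func clockDeltaWithinTolerance(guestUnix string, local time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestUnix, 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(local)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Value taken from the log output above; tolerance is an assumption.
	delta, ok, err := clockDeltaWithinTolerance("1721400126.348944461", time.Now(), 2*time.Second)
	fmt.Println(delta, ok, err)
}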
	I0719 14:42:06.383448   22606 start.go:83] releasing machines lock for "ha-999305-m03", held for 24.136678926s
	I0719 14:42:06.383487   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.383737   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:06.386212   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.386715   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.386746   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.389297   22606 out.go:177] * Found network options:
	I0719 14:42:06.390873   22606 out.go:177]   - NO_PROXY=192.168.39.240,192.168.39.163
	W0719 14:42:06.392060   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 14:42:06.392081   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:42:06.392095   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.392741   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.392926   22606 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:42:06.393040   22606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:42:06.393082   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	W0719 14:42:06.393108   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 14:42:06.393135   22606 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 14:42:06.393230   22606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:42:06.393250   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:42:06.395892   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396195   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396241   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.396267   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396361   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.396532   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.396717   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.396749   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:06.396776   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:06.396879   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:06.396948   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:42:06.397076   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:42:06.397192   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:42:06.397435   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:42:06.651418   22606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:42:06.657681   22606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:42:06.657740   22606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:42:06.674396   22606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 14:42:06.674429   22606 start.go:495] detecting cgroup driver to use...
	I0719 14:42:06.674519   22606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:42:06.693586   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:42:06.709626   22606 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:42:06.709705   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:42:06.726709   22606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:42:06.742662   22606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:42:06.869913   22606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:42:07.020252   22606 docker.go:233] disabling docker service ...
	I0719 14:42:07.020311   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:42:07.036261   22606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:42:07.050577   22606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:42:07.211233   22606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:42:07.331892   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:42:07.347994   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:42:07.369093   22606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:42:07.369157   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.380134   22606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:42:07.380206   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.392471   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.404677   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.417011   22606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:42:07.429508   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.441319   22606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.460150   22606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:42:07.471989   22606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:42:07.482871   22606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 14:42:07.482944   22606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 14:42:07.498590   22606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:42:07.509676   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:42:07.619316   22606 ssh_runner.go:195] Run: sudo systemctl restart crio
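The sequence from 14:42:07.347 to 14:42:07.619 points crictl at the CRI-O socket, rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, enables IP forwarding, and restarts CRI-O. A condensed Go sketch of the same shell steps, run locally via sudo rather than over SSH; the commands are copied from the log and the error handling is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// configureCRIO replays the CRI-O adjustments shown in the log above as a
// simple sequence of shell steps. A sketch, not minikube's crio.go.
func configureCRIO() error {
	steps := []string{
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf`,
		`modprobe br_netfilter || true`,
		`echo 1 > /proc/sys/net/ipv4/ip_forward`,
		`systemctl daemon-reload && systemctl restart crio`,
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", "sh", "-c", s).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO(); err != nil {
		fmt.Println(err)
	}
}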
	I0719 14:42:07.774139   22606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:42:07.774222   22606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:42:07.780154   22606 start.go:563] Will wait 60s for crictl version
	I0719 14:42:07.780218   22606 ssh_runner.go:195] Run: which crictl
	I0719 14:42:07.784105   22606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:42:07.826224   22606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:42:07.826315   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:42:07.854718   22606 ssh_runner.go:195] Run: crio --version
	I0719 14:42:07.886427   22606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:42:07.887867   22606 out.go:177]   - env NO_PROXY=192.168.39.240
	I0719 14:42:07.889136   22606 out.go:177]   - env NO_PROXY=192.168.39.240,192.168.39.163
	I0719 14:42:07.890351   22606 main.go:141] libmachine: (ha-999305-m03) Calling .GetIP
	I0719 14:42:07.894403   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:07.894758   22606 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:42:07.894774   22606 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:42:07.895059   22606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:42:07.899593   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 14:42:07.913074   22606 mustload.go:65] Loading cluster: ha-999305
	I0719 14:42:07.913366   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:42:07.913802   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:42:07.913856   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:42:07.930755   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45519
	I0719 14:42:07.931215   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:42:07.931704   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:42:07.931726   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:42:07.932078   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:42:07.932232   22606 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:42:07.933780   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:42:07.934078   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:42:07.934117   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:42:07.949546   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0719 14:42:07.949968   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:42:07.950463   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:42:07.950490   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:42:07.950817   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:42:07.951027   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:42:07.951239   22606 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.250
	I0719 14:42:07.951256   22606 certs.go:194] generating shared ca certs ...
	I0719 14:42:07.951291   22606 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:42:07.951521   22606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:42:07.951573   22606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:42:07.951583   22606 certs.go:256] generating profile certs ...
	I0719 14:42:07.951694   22606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:42:07.951723   22606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9
	I0719 14:42:07.951741   22606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.163 192.168.39.250 192.168.39.254]
	I0719 14:42:08.155558   22606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9 ...
	I0719 14:42:08.155589   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9: {Name:mka66c74b7110ebe18159f5d744d4156e88f5f36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:42:08.155770   22606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9 ...
	I0719 14:42:08.155784   22606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9: {Name:mk29bd0294b90a74ad2dd8700ab0de425474ddd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:42:08.155865   22606 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.23d06cc9 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:42:08.156048   22606 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.23d06cc9 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
	I0719 14:42:08.156233   22606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
	I0719 14:42:08.156254   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:42:08.156274   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:42:08.156291   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:42:08.156309   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:42:08.156325   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:42:08.156345   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:42:08.156364   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:42:08.156381   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
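The apiserver certificate generated above is signed for every address a client might use: the in-cluster service IPs, localhost, each control-plane node IP, and the kube-vip VIP 192.168.39.254. A self-contained Go sketch of that idea using crypto/x509, with a throwaway CA and an illustrative subject; this is not minikube's own crypto helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signAPIServerCert signs a server certificate whose SANs cover all the IPs a
// client might dial, mirroring the "Generating cert ... with IP's" line above.
func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, sans []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway self-signed CA; error handling elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.240"), net.ParseIP("192.168.39.163"),
		net.ParseIP("192.168.39.250"), net.ParseIP("192.168.39.254"),
	}
	der, _, err := signAPIServerCert(caCert, caKey, sans)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("signed apiserver cert with %d SAN IPs (%d bytes)\n", len(sans), len(der))
}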
	I0719 14:42:08.156450   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:42:08.156492   22606 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:42:08.156502   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:42:08.156532   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:42:08.156563   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:42:08.156590   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:42:08.156641   22606 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:42:08.156682   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.156707   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.156723   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.156762   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:42:08.159987   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:08.160401   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:42:08.160424   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:08.160647   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:42:08.160840   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:42:08.161020   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:42:08.161118   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:42:08.234677   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0719 14:42:08.240082   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 14:42:08.253164   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0719 14:42:08.258475   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 14:42:08.272435   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 14:42:08.278073   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 14:42:08.299545   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0719 14:42:08.305145   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0719 14:42:08.317125   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0719 14:42:08.323174   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 14:42:08.336766   22606 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0719 14:42:08.341856   22606 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 14:42:08.353945   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:42:08.381045   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:42:08.406809   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:42:08.432257   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:42:08.458849   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 14:42:08.484505   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 14:42:08.508480   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:42:08.534839   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:42:08.561120   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:42:08.586976   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:42:08.612690   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:42:08.638273   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 14:42:08.655516   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 14:42:08.672081   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 14:42:08.691302   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0719 14:42:08.711127   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 14:42:08.729620   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 14:42:08.749185   22606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 14:42:08.769354   22606 ssh_runner.go:195] Run: openssl version
	I0719 14:42:08.776342   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:42:08.788591   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.793899   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.793959   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:42:08.800347   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:42:08.812460   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:42:08.825067   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.830454   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.830523   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:42:08.836894   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:42:08.848857   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:42:08.860838   22606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.866461   22606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.866518   22606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:42:08.872768   22606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
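Each PEM installed under /usr/share/ca-certificates also gets an OpenSSL subject-hash link such as /etc/ssl/certs/b5213941.0, which is what the `openssl x509 -hash -noout` plus `ln -fs` pairs above set up so the trust store can find certificates by hash. A small Go sketch of the same pattern; the paths match the log, and running it for real needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM and creates the
// "<hash>.0" symlink the trust store expects, emulating `ln -fs`.
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // force-replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}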
	I0719 14:42:08.884003   22606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:42:08.888485   22606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 14:42:08.888541   22606 kubeadm.go:934] updating node {m03 192.168.39.250 8443 v1.30.3 crio true true} ...
	I0719 14:42:08.888649   22606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:42:08.888685   22606 kube-vip.go:115] generating kube-vip config ...
	I0719 14:42:08.888729   22606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:42:08.904318   22606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 14:42:08.904380   22606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
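The manifest above is later written as a static pod (see the scp to /etc/kubernetes/manifests/kube-vip.yaml below), so each control plane runs kube-vip with leader election, announces the VIP 192.168.39.254 over ARP, and with lb_enable spreads API traffic across members. A heavily trimmed Go sketch of how such a manifest can be templated; only the VIP and port are parameterised here, and this is not minikube's real template:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// kubeVIPTmpl is a cut-down stand-in for the kube-vip static pod manifest above.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: "{{ .VIP }}"
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	// In the run above the rendered file ends up at /etc/kubernetes/manifests/kube-vip.yaml.
	if err := t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443}); err != nil {
		fmt.Println(err)
	}
}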
	I0719 14:42:08.904433   22606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:42:08.915153   22606 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 14:42:08.915205   22606 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 14:42:08.925502   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 14:42:08.925525   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:42:08.925526   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 14:42:08.925535   22606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 14:42:08.925551   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:42:08.925567   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:42:08.925577   22606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 14:42:08.925614   22606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 14:42:08.933493   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 14:42:08.933522   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 14:42:08.933534   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 14:42:08.933558   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 14:42:08.963552   22606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:42:08.963668   22606 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 14:42:09.088117   22606 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 14:42:09.088163   22606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
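Each kubectl/kubeadm/kubelet transfer above is guarded by a stat existence check so cached binaries are only copied when missing on the node. A Go sketch of that check-then-copy pattern, using a plain file copy in place of the scp-over-SSH the log performs:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies the cached binary to the target path only when the
// target does not already exist, mirroring the stat + scp pairs in the log.
func ensureBinary(cache, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present, nothing to transfer
	}
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		return err
	}
	src, err := os.Open(cache)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	err := ensureBinary(
		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.30.3/kubelet"),
		"/var/lib/minikube/binaries/v1.30.3/kubelet",
	)
	fmt.Println(err)
}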
	I0719 14:42:09.905347   22606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 14:42:09.917595   22606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 14:42:09.935190   22606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:42:09.953121   22606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 14:42:09.970645   22606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:42:09.974872   22606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
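The one-liner above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal line, appending the VIP mapping, and copying the temp file back over the original. A Go sketch of the same upsert, assuming direct file access rather than sudo:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line already ending in the hostname, appends the
// new mapping, and replaces the file via a temp file, like the shell one-liner.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	fmt.Println(upsertHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"))
}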
	I0719 14:42:09.988254   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:42:10.123875   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:42:10.144548   22606 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:42:10.145162   22606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:42:10.145230   22606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:42:10.160984   22606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0719 14:42:10.161371   22606 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:42:10.161822   22606 main.go:141] libmachine: Using API Version  1
	I0719 14:42:10.161844   22606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:42:10.162156   22606 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:42:10.162353   22606 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:42:10.162487   22606 start.go:317] joinCluster: &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:42:10.162642   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 14:42:10.162661   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:42:10.165562   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:10.165975   22606 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:42:10.166008   22606 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:42:10.166168   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:42:10.166350   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:42:10.166501   22606 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:42:10.166618   22606 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:42:10.326567   22606 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:42:10.326703   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmmbuf.uyj5wommz39npgzn --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443"
	I0719 14:42:33.887098   22606 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kmmbuf.uyj5wommz39npgzn --discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-999305-m03 --control-plane --apiserver-advertise-address=192.168.39.250 --apiserver-bind-port=8443": (23.56035373s)
	I0719 14:42:33.887135   22606 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 14:42:34.455281   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-999305-m03 minikube.k8s.io/updated_at=2024_07_19T14_42_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=ha-999305 minikube.k8s.io/primary=false
	I0719 14:42:34.618715   22606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-999305-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 14:42:34.753210   22606 start.go:319] duration metric: took 24.590729029s to joinCluster
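The join above took about 23.5s: the primary issues `kubeadm token create --print-join-command`, and the new node runs the resulting command with extra control-plane flags so it advertises its own IP behind the shared endpoint. A sketch of how that invocation is assembled; the token and hash are placeholders, not values from this run:

package main

import (
	"fmt"
	"strings"
)

// joinCommand assembles a control-plane join invocation of the shape shown in
// the log above.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	parts := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"<token>", "sha256:<hash>", "ha-999305-m03", "192.168.39.250", 8443,
	))
}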
	I0719 14:42:34.753292   22606 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 14:42:34.753782   22606 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:42:34.754753   22606 out.go:177] * Verifying Kubernetes components...
	I0719 14:42:34.755976   22606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:42:34.939090   22606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:42:34.955163   22606 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:42:34.955521   22606 kapi.go:59] client config for ha-999305: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key", CAFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 14:42:34.955631   22606 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0719 14:42:34.955993   22606 node_ready.go:35] waiting up to 6m0s for node "ha-999305-m03" to be "Ready" ...
	I0719 14:42:34.956124   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:34.956135   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:34.956147   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:34.956153   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:34.959751   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
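From here the test polls GET /api/v1/nodes/ha-999305-m03 roughly every half second until the node's Ready condition turns True or the 6-minute budget runs out, hence the long run of near-identical requests that follows. A Go sketch of that wait loop; the *http.Client is assumed to already carry the kubeconfig's client certificates:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the node object and returns once its Ready condition is
// "True", or errors when the deadline passes. A sketch of the loop in the log.
func waitNodeReady(c *http.Client, apiServer, node string, timeout time.Duration) error {
	type condition struct{ Type, Status string }
	type nodeStatus struct {
		Status struct {
			Conditions []condition `json:"conditions"`
		} `json:"status"`
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := c.Get(apiServer + "/api/v1/nodes/" + node)
		if err == nil {
			var ns nodeStatus
			if json.NewDecoder(resp.Body).Decode(&ns) == nil {
				for _, cond := range ns.Status.Conditions {
					if cond.Type == "Ready" && cond.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", node, timeout)
}

func main() {
	// Placeholder client; a real call needs the TLS config from the kubeconfig.
	err := waitNodeReady(http.DefaultClient, "https://192.168.39.240:8443", "ha-999305-m03", 6*time.Minute)
	fmt.Println(err)
}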
	I0719 14:42:35.457117   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:35.457141   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:35.457152   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:35.457156   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:35.461180   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:35.957115   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:35.957142   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:35.957153   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:35.957159   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:35.960910   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:36.457092   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:36.457119   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:36.457130   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:36.457136   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:36.460977   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:36.956422   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:36.956446   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:36.956458   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:36.956466   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:36.961459   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:36.962412   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:37.456187   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:37.456209   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:37.456218   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:37.456224   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:37.460123   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:37.957077   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:37.957097   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:37.957108   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:37.957113   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:37.965507   22606 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 14:42:38.457128   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:38.457152   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:38.457161   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:38.457166   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:38.460458   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:38.957157   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:38.957182   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:38.957192   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:38.957199   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:38.960795   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:39.457193   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:39.457216   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:39.457227   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:39.457233   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:39.460706   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:39.462038   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:39.957050   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:39.957073   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:39.957085   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:39.957090   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:39.961127   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:40.456852   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:40.456877   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:40.456894   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:40.456902   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:40.461231   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:40.957007   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:40.957027   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:40.957033   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:40.957037   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:40.960618   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:41.456339   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:41.456406   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:41.456421   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:41.456425   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:41.460713   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:41.957108   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:41.957132   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:41.957140   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:41.957145   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:41.960629   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:41.961502   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:42.457049   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:42.457072   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:42.457090   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:42.457094   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:42.461328   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:42.956821   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:42.956848   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:42.956862   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:42.956867   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:42.959890   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:43.456347   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:43.456371   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:43.456379   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:43.456382   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:43.460228   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:43.957200   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:43.957227   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:43.957247   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:43.957252   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:43.960794   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:44.456739   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:44.456760   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:44.456768   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:44.456772   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:44.460098   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:44.460570   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:44.957076   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:44.957103   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:44.957114   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:44.957122   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:44.960760   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:45.456191   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:45.456219   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:45.456228   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:45.456233   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:45.462116   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:42:45.956610   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:45.956631   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:45.956639   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:45.956642   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:45.959898   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:46.456872   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:46.456898   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:46.456906   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:46.456909   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:46.460426   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:46.461232   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:46.956939   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:46.956962   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:46.956973   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:46.956977   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:46.960379   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:47.456802   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:47.456827   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:47.456835   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:47.456839   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:47.460272   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:47.956737   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:47.956759   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:47.956766   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:47.956769   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:47.960457   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:48.456702   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:48.456726   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:48.456740   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:48.456744   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:48.459473   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:48.956908   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:48.956947   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:48.956958   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:48.956962   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:48.960089   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:48.960793   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:49.456957   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:49.456981   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:49.456991   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:49.456996   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:49.461129   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:49.956657   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:49.956708   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:49.956721   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:49.956727   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:49.959955   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:50.456639   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:50.456663   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:50.456670   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:50.456675   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:50.463031   22606 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 14:42:50.957102   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:50.957126   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:50.957137   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:50.957146   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:50.960456   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:50.961262   22606 node_ready.go:53] node "ha-999305-m03" has status "Ready":"False"
	I0719 14:42:51.456607   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:51.456633   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.456643   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.456651   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.460171   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:51.956523   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:51.956552   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.956564   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.956571   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.960312   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:51.960845   22606 node_ready.go:49] node "ha-999305-m03" has status "Ready":"True"
	I0719 14:42:51.960866   22606 node_ready.go:38] duration metric: took 17.004855917s for node "ha-999305-m03" to be "Ready" ...
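The loop above polls the node object roughly every 500ms until its Ready condition reports True. A minimal client-go sketch of the same idea (illustrative only, not minikube's actual node_ready implementation; the package name and helper are placeholders):

    // Package readycheck: illustrative sketch, not minikube's code.
    package readycheck

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls a node roughly every 500ms, as in the loop logged
    // above, until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, cond := range node.Status.Conditions {
                if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }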
	I0719 14:42:51.960877   22606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 14:42:51.960946   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:51.960954   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.960961   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.960965   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.966819   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:42:51.974936   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.975026   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-9sxgr
	I0719 14:42:51.975038   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.975048   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.975052   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.977986   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.978835   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:51.978851   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.978861   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.978868   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.981993   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:51.982529   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:51.982551   22606 pod_ready.go:81] duration metric: took 7.586598ms for pod "coredns-7db6d8ff4d-9sxgr" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.982569   22606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.982644   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gtwxd
	I0719 14:42:51.982656   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.982665   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.982676   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.985021   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.985638   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:51.985653   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.985658   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.985661   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.988327   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.988790   22606 pod_ready.go:92] pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:51.988806   22606 pod_ready.go:81] duration metric: took 6.22847ms for pod "coredns-7db6d8ff4d-gtwxd" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.988818   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.988886   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305
	I0719 14:42:51.988897   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.988907   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.988914   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.991620   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.992191   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:51.992207   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.992214   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.992220   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.994617   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.995050   22606 pod_ready.go:92] pod "etcd-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:51.995070   22606 pod_ready.go:81] duration metric: took 6.240102ms for pod "etcd-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.995081   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:51.995154   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m02
	I0719 14:42:51.995165   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.995184   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.995193   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:51.997965   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:51.998609   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:51.998623   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:51.998630   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:51.998633   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.002349   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.002850   22606 pod_ready.go:92] pod "etcd-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:52.002872   22606 pod_ready.go:81] duration metric: took 7.767749ms for pod "etcd-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.002883   22606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.157318   22606 request.go:629] Waited for 154.360427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m03
	I0719 14:42:52.157390   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-999305-m03
	I0719 14:42:52.157398   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.157406   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.157409   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.161535   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:52.356970   22606 request.go:629] Waited for 194.29248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:52.357052   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:52.357064   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.357075   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.357083   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.360523   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.361168   22606 pod_ready.go:92] pod "etcd-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:52.361187   22606 pod_ready.go:81] duration metric: took 358.296734ms for pod "etcd-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
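The "Waited for ... due to client-side throttling, not priority and fairness" lines interleaved above come from client-go's client-side rate limiter, not from the apiserver. A sketch of raising those limits when building a clientset (illustrative only; assumes the client-go and clientcmd imports plus the standard log package, and `kubeconfigPath` is a placeholder; the stock defaults are low, commonly QPS 5 / Burst 10):

    // Illustrative: raise client-go's client-side rate limits so short bursts
    // of GETs are not delayed the way the throttling messages above show.
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    if err != nil {
        log.Fatal(err)
    }
    cfg.QPS = 50    // sustained requests per second allowed by the client
    cfg.Burst = 100 // short-term burst above QPS
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    _ = cs // use the clientset as usual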
	I0719 14:42:52.361202   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.557398   22606 request.go:629] Waited for 196.137818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:42:52.557472   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305
	I0719 14:42:52.557479   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.557487   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.557495   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.561209   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.757252   22606 request.go:629] Waited for 195.355592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:52.757304   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:52.757309   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.757316   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.757320   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.760530   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:52.761162   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:52.761184   22606 pod_ready.go:81] duration metric: took 399.974493ms for pod "kube-apiserver-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.761196   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:52.956945   22606 request.go:629] Waited for 195.673996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:42:52.957033   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m02
	I0719 14:42:52.957045   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:52.957057   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:52.957066   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:52.960450   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.156508   22606 request.go:629] Waited for 195.301603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:53.156574   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:53.156580   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.156587   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.156592   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.159883   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.160421   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:53.160437   22606 pod_ready.go:81] duration metric: took 399.233428ms for pod "kube-apiserver-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.160446   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.357066   22606 request.go:629] Waited for 196.550702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m03
	I0719 14:42:53.357162   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-999305-m03
	I0719 14:42:53.357170   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.357181   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.357189   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.364480   22606 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 14:42:53.557462   22606 request.go:629] Waited for 192.390414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:53.557546   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:53.557555   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.557563   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.557568   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.561614   22606 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 14:42:53.562163   22606 pod_ready.go:92] pod "kube-apiserver-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:53.562183   22606 pod_ready.go:81] duration metric: took 401.730871ms for pod "kube-apiserver-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.562196   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.757278   22606 request.go:629] Waited for 195.000821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:42:53.757351   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305
	I0719 14:42:53.757359   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.757370   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.757380   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.760743   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.956524   22606 request.go:629] Waited for 194.666271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:53.956588   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:53.956593   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:53.956600   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:53.956604   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:53.960380   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:53.960877   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:53.960900   22606 pod_ready.go:81] duration metric: took 398.69165ms for pod "kube-controller-manager-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:53.960914   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.156997   22606 request.go:629] Waited for 195.992358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:42:54.157052   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m02
	I0719 14:42:54.157057   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.157064   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.157071   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.160744   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:54.356667   22606 request.go:629] Waited for 195.278383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:54.356720   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:54.356726   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.356736   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.356741   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.359947   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:54.360609   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:54.360626   22606 pod_ready.go:81] duration metric: took 399.705128ms for pod "kube-controller-manager-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.360636   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.556819   22606 request.go:629] Waited for 196.1253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m03
	I0719 14:42:54.556894   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-999305-m03
	I0719 14:42:54.556899   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.556907   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.556914   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.560662   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:54.756570   22606 request.go:629] Waited for 195.272157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:54.756653   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:54.756662   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.756675   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.756682   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.759662   22606 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 14:42:54.760172   22606 pod_ready.go:92] pod "kube-controller-manager-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:54.760188   22606 pod_ready.go:81] duration metric: took 399.546786ms for pod "kube-controller-manager-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.760199   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:54.957335   22606 request.go:629] Waited for 197.078407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:42:54.957412   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-766sx
	I0719 14:42:54.957419   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:54.957429   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:54.957435   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:54.960975   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.157039   22606 request.go:629] Waited for 195.391212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:55.157098   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:55.157105   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.157117   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.157123   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.160448   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.161110   22606 pod_ready.go:92] pod "kube-proxy-766sx" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:55.161130   22606 pod_ready.go:81] duration metric: took 400.924486ms for pod "kube-proxy-766sx" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.161139   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.356583   22606 request.go:629] Waited for 195.367291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:42:55.356643   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2wb7
	I0719 14:42:55.356648   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.356655   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.356661   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.362651   22606 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 14:42:55.556973   22606 request.go:629] Waited for 193.379237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:55.557038   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:55.557043   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.557051   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.557055   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.560246   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.560961   22606 pod_ready.go:92] pod "kube-proxy-s2wb7" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:55.560982   22606 pod_ready.go:81] duration metric: took 399.837176ms for pod "kube-proxy-s2wb7" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.560993   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-twh47" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.757035   22606 request.go:629] Waited for 195.977548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-twh47
	I0719 14:42:55.757099   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-twh47
	I0719 14:42:55.757106   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.757117   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.757123   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.760958   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.957308   22606 request.go:629] Waited for 195.431235ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:55.957386   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:55.957393   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:55.957401   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:55.957408   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:55.961039   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:55.961994   22606 pod_ready.go:92] pod "kube-proxy-twh47" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:55.962010   22606 pod_ready.go:81] duration metric: took 401.011812ms for pod "kube-proxy-twh47" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:55.962019   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.157218   22606 request.go:629] Waited for 195.136362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:42:56.157296   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305
	I0719 14:42:56.157303   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.157311   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.157317   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.160596   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.357526   22606 request.go:629] Waited for 196.357454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:56.357593   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305
	I0719 14:42:56.357604   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.357616   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.357622   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.360940   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.361494   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:56.361511   22606 pod_ready.go:81] duration metric: took 399.485902ms for pod "kube-scheduler-ha-999305" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.361520   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.556614   22606 request.go:629] Waited for 195.031893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:42:56.556681   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m02
	I0719 14:42:56.556690   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.556697   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.556703   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.560254   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.757456   22606 request.go:629] Waited for 196.362234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:56.757542   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m02
	I0719 14:42:56.757554   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.757563   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.757573   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.760801   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:56.761609   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:56.761626   22606 pod_ready.go:81] duration metric: took 400.100607ms for pod "kube-scheduler-ha-999305-m02" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.761634   22606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:56.956788   22606 request.go:629] Waited for 195.098635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m03
	I0719 14:42:56.956861   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-999305-m03
	I0719 14:42:56.956867   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:56.956874   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:56.956881   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:56.959944   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.157059   22606 request.go:629] Waited for 196.355561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:57.157120   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-999305-m03
	I0719 14:42:57.157135   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.157143   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.157146   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.160298   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.161973   22606 pod_ready.go:92] pod "kube-scheduler-ha-999305-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 14:42:57.161993   22606 pod_ready.go:81] duration metric: took 400.352789ms for pod "kube-scheduler-ha-999305-m03" in "kube-system" namespace to be "Ready" ...
	I0719 14:42:57.162005   22606 pod_ready.go:38] duration metric: took 5.2011011s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
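Each pod_ready wait above boils down to reading one pod and checking its Ready condition. A sketch of that per-pod check (illustrative only, not minikube's pod_ready code; uses the same imports and clientset as the earlier node-readiness sketch):

    // podIsReady reports whether a pod's Ready condition is True, the same
    // check the pod_ready waits above perform for each system-critical pod.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }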
	I0719 14:42:57.162024   22606 api_server.go:52] waiting for apiserver process to appear ...
	I0719 14:42:57.162077   22606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:42:57.180381   22606 api_server.go:72] duration metric: took 22.427053068s to wait for apiserver process to appear ...
	I0719 14:42:57.180398   22606 api_server.go:88] waiting for apiserver healthz status ...
	I0719 14:42:57.180420   22606 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0719 14:42:57.184804   22606 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0719 14:42:57.184870   22606 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0719 14:42:57.184877   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.184884   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.184890   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.185745   22606 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 14:42:57.185803   22606 api_server.go:141] control plane version: v1.30.3
	I0719 14:42:57.185820   22606 api_server.go:131] duration metric: took 5.414651ms to wait for apiserver health ...
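The health wait above first probes /healthz (expecting a plain "ok") and then reads /version for the control-plane version. A rough sketch of the /healthz probe (illustrative only; assumes net/http, crypto/tls, io, fmt, log and time imports; TLS verification is skipped purely to keep it short, and an anonymous request may be rejected depending on the cluster's RBAC, so a real check would present the cluster's client credentials and CA as minikube presumably does):

    // Illustrative: probe the apiserver's /healthz endpoint used by the wait above.
    httpClient := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    resp, err := httpClient.Get("https://192.168.39.240:8443/healthz")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // a healthy apiserver returns 200 and "ok"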
	I0719 14:42:57.185832   22606 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 14:42:57.357581   22606 request.go:629] Waited for 171.672444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.357642   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.357649   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.357663   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.357671   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.365728   22606 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 14:42:57.372152   22606 system_pods.go:59] 24 kube-system pods found
	I0719 14:42:57.372186   22606 system_pods.go:61] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:42:57.372191   22606 system_pods.go:61] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:42:57.372195   22606 system_pods.go:61] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:42:57.372198   22606 system_pods.go:61] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:42:57.372203   22606 system_pods.go:61] "etcd-ha-999305-m03" [f15da934-29c7-444e-9e54-155ef0fb3145] Running
	I0719 14:42:57.372207   22606 system_pods.go:61] "kindnet-b7lvb" [fdca060a-b2bf-4c7c-aea7-289593af789f] Running
	I0719 14:42:57.372210   22606 system_pods.go:61] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:42:57.372214   22606 system_pods.go:61] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:42:57.372217   22606 system_pods.go:61] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:42:57.372222   22606 system_pods.go:61] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:42:57.372229   22606 system_pods.go:61] "kube-apiserver-ha-999305-m03" [d02979f6-fd79-424c-a802-f40f6c484689] Running
	I0719 14:42:57.372238   22606 system_pods.go:61] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:42:57.372248   22606 system_pods.go:61] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:42:57.372256   22606 system_pods.go:61] "kube-controller-manager-ha-999305-m03" [2f599812-e46f-4151-aae3-37d551e7b26e] Running
	I0719 14:42:57.372262   22606 system_pods.go:61] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:42:57.372271   22606 system_pods.go:61] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:42:57.372280   22606 system_pods.go:61] "kube-proxy-twh47" [dabe7d25-8bd8-42f8-9efd-0c800be277b3] Running
	I0719 14:42:57.372287   22606 system_pods.go:61] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:42:57.372296   22606 system_pods.go:61] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:42:57.372305   22606 system_pods.go:61] "kube-scheduler-ha-999305-m03" [ba5e9e04-3ebb-4839-8b1f-df899690be04] Running
	I0719 14:42:57.372311   22606 system_pods.go:61] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:42:57.372319   22606 system_pods.go:61] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:42:57.372325   22606 system_pods.go:61] "kube-vip-ha-999305-m03" [c47c9bb1-e77b-40a3-a92f-9702dbb222ff] Running
	I0719 14:42:57.372331   22606 system_pods.go:61] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:42:57.372343   22606 system_pods.go:74] duration metric: took 186.500633ms to wait for pod list to return data ...
	I0719 14:42:57.372357   22606 default_sa.go:34] waiting for default service account to be created ...
	I0719 14:42:57.556905   22606 request.go:629] Waited for 184.456317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:42:57.556971   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0719 14:42:57.556979   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.556991   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.557000   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.560115   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.560229   22606 default_sa.go:45] found service account: "default"
	I0719 14:42:57.560243   22606 default_sa.go:55] duration metric: took 187.875258ms for default service account to be created ...
	I0719 14:42:57.560251   22606 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 14:42:57.757551   22606 request.go:629] Waited for 197.240039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.757646   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0719 14:42:57.757654   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.757663   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.757669   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.764689   22606 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 14:42:57.772768   22606 system_pods.go:86] 24 kube-system pods found
	I0719 14:42:57.772795   22606 system_pods.go:89] "coredns-7db6d8ff4d-9sxgr" [f394b2d0-345c-4f2c-9c30-4c7c8c13361b] Running
	I0719 14:42:57.772800   22606 system_pods.go:89] "coredns-7db6d8ff4d-gtwxd" [8ccad831-1940-4a7c-bea7-a73b07f9d3a2] Running
	I0719 14:42:57.772804   22606 system_pods.go:89] "etcd-ha-999305" [80889bd1-d6c9-404f-a23a-92238bee5c5a] Running
	I0719 14:42:57.772809   22606 system_pods.go:89] "etcd-ha-999305-m02" [875db75b-0368-4883-8e7e-fe9be86d032d] Running
	I0719 14:42:57.772813   22606 system_pods.go:89] "etcd-ha-999305-m03" [f15da934-29c7-444e-9e54-155ef0fb3145] Running
	I0719 14:42:57.772817   22606 system_pods.go:89] "kindnet-b7lvb" [fdca060a-b2bf-4c7c-aea7-289593af789f] Running
	I0719 14:42:57.772821   22606 system_pods.go:89] "kindnet-hsb9f" [0110cef5-fa4d-4ee8-934d-2cdf2b8f6d2a] Running
	I0719 14:42:57.772825   22606 system_pods.go:89] "kindnet-tpffr" [e6847e94-cf07-4fa7-9729-dca36c54672e] Running
	I0719 14:42:57.772829   22606 system_pods.go:89] "kube-apiserver-ha-999305" [6eec2917-02cc-4f56-b86e-326fd045eca4] Running
	I0719 14:42:57.772832   22606 system_pods.go:89] "kube-apiserver-ha-999305-m02" [2de3b4e4-e2ed-4771-973b-29550d781217] Running
	I0719 14:42:57.772836   22606 system_pods.go:89] "kube-apiserver-ha-999305-m03" [d02979f6-fd79-424c-a802-f40f6c484689] Running
	I0719 14:42:57.772840   22606 system_pods.go:89] "kube-controller-manager-ha-999305" [62152115-c62b-421d-bee6-3f8f342132b2] Running
	I0719 14:42:57.772844   22606 system_pods.go:89] "kube-controller-manager-ha-999305-m02" [41d3319e-07ff-4744-8439-39afaf2f052e] Running
	I0719 14:42:57.772849   22606 system_pods.go:89] "kube-controller-manager-ha-999305-m03" [2f599812-e46f-4151-aae3-37d551e7b26e] Running
	I0719 14:42:57.772853   22606 system_pods.go:89] "kube-proxy-766sx" [277263a7-c68c-4aaa-8e02-6e121cf57215] Running
	I0719 14:42:57.772857   22606 system_pods.go:89] "kube-proxy-s2wb7" [3f96f5ff-96c6-460c-b8da-23d5dda42745] Running
	I0719 14:42:57.772862   22606 system_pods.go:89] "kube-proxy-twh47" [dabe7d25-8bd8-42f8-9efd-0c800be277b3] Running
	I0719 14:42:57.772867   22606 system_pods.go:89] "kube-scheduler-ha-999305" [949b590d-826f-4e87-b128-2a855b692df5] Running
	I0719 14:42:57.772875   22606 system_pods.go:89] "kube-scheduler-ha-999305-m02" [204cf39e-0ac8-4960-9188-b31b263ddca1] Running
	I0719 14:42:57.772879   22606 system_pods.go:89] "kube-scheduler-ha-999305-m03" [ba5e9e04-3ebb-4839-8b1f-df899690be04] Running
	I0719 14:42:57.772884   22606 system_pods.go:89] "kube-vip-ha-999305" [81ac3b87-e88d-4ee9-98ca-5c098350c157] Running
	I0719 14:42:57.772889   22606 system_pods.go:89] "kube-vip-ha-999305-m02" [a53de8c8-3847-4110-bbc8-09f99f377c63] Running
	I0719 14:42:57.772894   22606 system_pods.go:89] "kube-vip-ha-999305-m03" [c47c9bb1-e77b-40a3-a92f-9702dbb222ff] Running
	I0719 14:42:57.772898   22606 system_pods.go:89] "storage-provisioner" [5dc00743-8980-495b-9a44-c3d3d42829f6] Running
	I0719 14:42:57.772906   22606 system_pods.go:126] duration metric: took 212.648177ms to wait for k8s-apps to be running ...
	I0719 14:42:57.772915   22606 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 14:42:57.772953   22606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:42:57.790515   22606 system_svc.go:56] duration metric: took 17.590313ms WaitForService to wait for kubelet
	I0719 14:42:57.790544   22606 kubeadm.go:582] duration metric: took 23.037217643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:42:57.790570   22606 node_conditions.go:102] verifying NodePressure condition ...
	I0719 14:42:57.956745   22606 request.go:629] Waited for 166.090864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0719 14:42:57.956807   22606 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0719 14:42:57.956812   22606 round_trippers.go:469] Request Headers:
	I0719 14:42:57.956819   22606 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 14:42:57.956826   22606 round_trippers.go:473]     Accept: application/json, */*
	I0719 14:42:57.960793   22606 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 14:42:57.961763   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:42:57.961785   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:42:57.961802   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:42:57.961806   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:42:57.961815   22606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 14:42:57.961820   22606 node_conditions.go:123] node cpu capacity is 2
	I0719 14:42:57.961823   22606 node_conditions.go:105] duration metric: took 171.248783ms to run NodePressure ...
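The NodePressure step above lists all nodes and reads their capacity figures (here 17734596Ki of ephemeral storage and 2 CPUs per node). A sketch of reading those fields (illustrative only; reuses the clientset `cs` and imports from the earlier sketches):

    // Illustrative: print each node's capacity fields as summarized above.
    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, n := range nodes.Items {
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    }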
	I0719 14:42:57.961836   22606 start.go:241] waiting for startup goroutines ...
	I0719 14:42:57.961861   22606 start.go:255] writing updated cluster config ...
	I0719 14:42:57.962141   22606 ssh_runner.go:195] Run: rm -f paused
	I0719 14:42:58.014427   22606 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 14:42:58.016272   22606 out.go:177] * Done! kubectl is now configured to use "ha-999305" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.221572874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400456221552663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a41cccb-0c6d-4264-9d4a-0b2e699b74d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.222022584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c89a0b3b-a908-42e3-b5bb-2585c7e85ee0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.222100697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c89a0b3b-a908-42e3-b5bb-2585c7e85ee0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.222357694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c89a0b3b-a908-42e3-b5bb-2585c7e85ee0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.260694917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5308a3bf-cb0e-4d40-8316-726e14266e80 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.260952795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5308a3bf-cb0e-4d40-8316-726e14266e80 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.262190816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b182e2a-2ef0-4648-8124-b6f61083de14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.262824506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400456262622087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b182e2a-2ef0-4648-8124-b6f61083de14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.264061092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39121b34-885b-45b7-80bf-d54002f7df2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.264141255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39121b34-885b-45b7-80bf-d54002f7df2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.264370297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39121b34-885b-45b7-80bf-d54002f7df2e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.306369658Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8da1f0b-c217-443a-90f2-ed2698357953 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.306445860Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8da1f0b-c217-443a-90f2-ed2698357953 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.307952725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c7a8580-81f5-4dad-8f5d-79902b277586 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.308382979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400456308359944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c7a8580-81f5-4dad-8f5d-79902b277586 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.308990210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4568b88-f5ee-483a-b7f0-744fff4514e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.309060604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4568b88-f5ee-483a-b7f0-744fff4514e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.309302994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4568b88-f5ee-483a-b7f0-744fff4514e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.345413010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa2dd22f-5895-4588-b47d-0f3698566547 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.345509953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa2dd22f-5895-4588-b47d-0f3698566547 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.346804790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50cb1c18-9fae-40c5-ac96-6ed197be0993 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.347278879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400456347257056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50cb1c18-9fae-40c5-ac96-6ed197be0993 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.347646828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e758081f-8c0b-4759-9461-296ffa88078a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.347716853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e758081f-8c0b-4759-9461-296ffa88078a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:47:36 ha-999305 crio[679]: time="2024-07-19 14:47:36.348144152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400183757439383,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b8829b3ccfbccffcea77c646a0313b76259e84d201b6fa6a2b4787eafd2487f,PodSandboxId:6fe36c95a046d8f7dd330e7f201575dc1be2363a5683eadfb9b675917ad20d9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721399970954538986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970872160066,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721399970877622371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-19
40-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213999
58717300027,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721399958388702112,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e,PodSandboxId:63a7f05b44c0048eaf5e90fadfc64b156ac318e8ddff3d9dfe1be59b2d013505,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721399942790614131,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5d37397e8bbe14fd0a6ff822ddd78e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721399938926777180,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721399938874976202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf,PodSandboxId:7ac313e234322de20ce89b525f60ba636a8615042797dc561276a60eefdf5e2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721399938879803325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723,PodSandboxId:e32b9b9f27b98b049b6f85da1f0fbfa94b2d100d846e68b344c426da115d5079,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721399938814551383,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e758081f-8c0b-4759-9461-296ffa88078a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d401082f94c28       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   f0b7b801c04fe       busybox-fc5497c4f-2rfw6
	7b8829b3ccfbc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   6fe36c95a046d       storage-provisioner
	8a1cd64a0c897       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   35affd85abc52       coredns-7db6d8ff4d-gtwxd
	60ddffbf7c51f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   1eb500abeaf59       coredns-7db6d8ff4d-9sxgr
	f411cdcc4b000       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      8 minutes ago       Running             kindnet-cni               0                   b21ce83a41d26       kindnet-tpffr
	3df47e2e7e71d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago       Running             kube-proxy                0                   0bc58fc40b11b       kube-proxy-s2wb7
	f81aa97ac4ed4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   63a7f05b44c00       kube-vip-ha-999305
	4106d6aa51360       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   4fe960d43fbe4       etcd-ha-999305
	85e5d02964a27       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   7ac313e234322       kube-controller-manager-ha-999305
	eea532e07ff56       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   01e1ea6c3d6e9       kube-scheduler-ha-999305
	21f9837a6d159       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   e32b9b9f27b98       kube-apiserver-ha-999305
	
	
	==> coredns [60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75] <==
	[INFO] 10.244.2.2:39902 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000117355s
	[INFO] 10.244.1.2:47815 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090125s
	[INFO] 10.244.0.4:60010 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113473s
	[INFO] 10.244.0.4:58011 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130179s
	[INFO] 10.244.0.4:42306 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.008431977s
	[INFO] 10.244.0.4:37231 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199786s
	[INFO] 10.244.0.4:46408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015144s
	[INFO] 10.244.2.2:44298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253661s
	[INFO] 10.244.2.2:46320 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124288s
	[INFO] 10.244.2.2:55428 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001507596s
	[INFO] 10.244.2.2:49678 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072967s
	[INFO] 10.244.1.2:50895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001783712s
	[INFO] 10.244.1.2:40165 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093772s
	[INFO] 10.244.1.2:53172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001252641s
	[INFO] 10.244.1.2:34815 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105356s
	[INFO] 10.244.1.2:37850 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213269s
	[INFO] 10.244.2.2:37470 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132796s
	[INFO] 10.244.1.2:53739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116332s
	[INFO] 10.244.1.2:49785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150432s
	[INFO] 10.244.1.2:39191 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095042s
	[INFO] 10.244.0.4:54115 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158247s
	[INFO] 10.244.2.2:54824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010194s
	[INFO] 10.244.2.2:53937 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137939s
	[INFO] 10.244.2.2:32859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135977s
	[INFO] 10.244.1.2:38346 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011678s
	
	
	==> coredns [8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59] <==
	[INFO] 10.244.0.4:57271 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004136013s
	[INFO] 10.244.0.4:41245 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000207213s
	[INFO] 10.244.0.4:53550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131429s
	[INFO] 10.244.2.2:43045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176233s
	[INFO] 10.244.2.2:58868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001941494s
	[INFO] 10.244.2.2:46158 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115413s
	[INFO] 10.244.2.2:48082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182529s
	[INFO] 10.244.1.2:43898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136537s
	[INFO] 10.244.1.2:41884 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111392s
	[INFO] 10.244.1.2:37393 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070881s
	[INFO] 10.244.0.4:38875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088591s
	[INFO] 10.244.0.4:39118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123769s
	[INFO] 10.244.0.4:52630 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045788s
	[INFO] 10.244.0.4:40500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041439s
	[INFO] 10.244.2.2:60125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195649s
	[INFO] 10.244.2.2:60453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126438s
	[INFO] 10.244.2.2:49851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00022498s
	[INFO] 10.244.1.2:57692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010212s
	[INFO] 10.244.0.4:59894 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230322s
	[INFO] 10.244.0.4:42506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177637s
	[INFO] 10.244.0.4:53162 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099069s
	[INFO] 10.244.2.2:44371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126437s
	[INFO] 10.244.1.2:47590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107441s
	[INFO] 10.244.1.2:44734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130206s
	[INFO] 10.244.1.2:33311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075949s
	
	
	==> describe nodes <==
	Name:               ha-999305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T14_39_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:47:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:43:11 +0000   Fri, 19 Jul 2024 14:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-999305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1230c1bed065421db8c3e4d5f899877a
	  System UUID:                1230c1be-d065-421d-b8c3-e4d5f899877a
	  Boot ID:                    7e7082ac-a784-4d5a-9539-9692157a7b3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2rfw6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7db6d8ff4d-9sxgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m18s
	  kube-system                 coredns-7db6d8ff4d-gtwxd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m18s
	  kube-system                 etcd-ha-999305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m31s
	  kube-system                 kindnet-tpffr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m19s
	  kube-system                 kube-apiserver-ha-999305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-ha-999305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-s2wb7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-ha-999305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-vip-ha-999305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m17s  kube-proxy       
	  Normal  Starting                 8m31s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m31s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m31s  kubelet          Node ha-999305 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s  kubelet          Node ha-999305 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s  kubelet          Node ha-999305 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m20s  node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal  NodeReady                8m6s   kubelet          Node ha-999305 status is now: NodeReady
	  Normal  RegisteredNode           6m3s   node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal  RegisteredNode           4m48s  node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	
	
	Name:               ha-999305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_41_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:41:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:44:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 14:43:19 +0000   Fri, 19 Jul 2024 14:45:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-999305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27a97bc8637c4fba94a7bb397a84b598
	  System UUID:                27a97bc8-637c-4fba-94a7-bb397a84b598
	  Boot ID:                    88201b08-f5f5-4c30-bf5f-464ac33b5a26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pcfwd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-999305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-hsb9f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-999305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-999305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-766sx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-999305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-999305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m16s                  kube-proxy       
	  Normal  RegisteredNode           6m20s                  node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node ha-999305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x7 over 6m20s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeNotReady             2m34s                  node-controller  Node ha-999305-m02 status is now: NodeNotReady
	
	
	Name:               ha-999305-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_42_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:42:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:47:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:43:31 +0000   Fri, 19 Jul 2024 14:42:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-999305-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c04be1041e3417f9ec04f3f6a94b977
	  System UUID:                6c04be10-41e3-417f-9ec0-4f3f6a94b977
	  Boot ID:                    5da44c63-207b-4952-951a-477e5f92088f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6kcdj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-999305-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-b7lvb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m6s
	  kube-system                 kube-apiserver-ha-999305-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-ha-999305-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-twh47                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-ha-999305-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-999305-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node ha-999305-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m5s                 node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal  RegisteredNode           4m48s                node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	
	
	Name:               ha-999305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_43_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:47:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:44:09 +0000   Fri, 19 Jul 2024 14:43:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-999305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d4c450135c44d386a1cb39310dd813
	  System UUID:                74d4c450-135c-44d3-86a1-cb39310dd813
	  Boot ID:                    afc8d137-990f-4f0e-9995-8644e493fa47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-j9gzv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m58s
	  kube-system                 kube-proxy-qqtph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m58s (x2 over 3m58s)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x2 over 3m58s)  kubelet          Node ha-999305-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x2 over 3m58s)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal  NodeReady                3m37s                  kubelet          Node ha-999305-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul19 14:38] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050186] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040031] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.519354] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.261259] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.592080] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.149646] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.056448] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062757] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.176758] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.118673] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.280022] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.245148] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.893793] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.060163] kauditd_printk_skb: 158 callbacks suppressed
	[Jul19 14:39] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.183971] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +6.719863] kauditd_printk_skb: 23 callbacks suppressed
	[ +19.024750] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 14:41] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50] <==
	{"level":"warn","ts":"2024-07-19T14:47:36.604338Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.613666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.620651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.624111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.636054Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.64401Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.650416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.654705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.658673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.667997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.67858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.685398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.688693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.691948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.699148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.704124Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.70652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.715616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.719108Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.722174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.727312Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.733813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.740707Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.742679Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T14:47:36.803676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:47:36 up 9 min,  0 users,  load average: 0.48, 0.36, 0.19
	Linux ha-999305 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d] <==
	I0719 14:46:59.899096       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:47:09.897755       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:47:09.897807       1 main.go:303] handling current node
	I0719 14:47:09.897834       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:47:09.897839       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:47:09.898086       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:47:09.898114       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:47:09.898188       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:47:09.898216       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:47:19.891987       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:47:19.892042       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:47:19.892237       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:47:19.892262       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:47:19.892310       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:47:19.892329       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:47:19.892371       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:47:19.892392       1 main.go:303] handling current node
	I0719 14:47:29.893924       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:47:29.894004       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:47:29.894162       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:47:29.894189       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:47:29.894241       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:47:29.894261       1 main.go:303] handling current node
	I0719 14:47:29.894272       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:47:29.894277       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723] <==
	W0719 14:39:03.633749       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240]
	I0719 14:39:03.634787       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 14:39:03.639484       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 14:39:03.767631       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 14:39:05.189251       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 14:39:05.207425       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 14:39:05.349604       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 14:39:17.882309       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 14:39:17.947829       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0719 14:43:04.792716       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53552: use of closed network connection
	E0719 14:43:04.984394       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53572: use of closed network connection
	E0719 14:43:05.183081       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53594: use of closed network connection
	E0719 14:43:05.406412       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53610: use of closed network connection
	E0719 14:43:05.590573       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53632: use of closed network connection
	E0719 14:43:05.781090       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45438: use of closed network connection
	E0719 14:43:05.968015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45452: use of closed network connection
	E0719 14:43:06.162066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45474: use of closed network connection
	E0719 14:43:06.350426       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45486: use of closed network connection
	E0719 14:43:06.643771       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45508: use of closed network connection
	E0719 14:43:06.830162       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45522: use of closed network connection
	E0719 14:43:07.031495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45540: use of closed network connection
	E0719 14:43:07.209952       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45556: use of closed network connection
	E0719 14:43:07.380822       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45576: use of closed network connection
	E0719 14:43:07.548203       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45590: use of closed network connection
	W0719 14:44:33.653101       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240 192.168.39.250]
	
	
	==> kube-controller-manager [85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf] <==
	I0719 14:42:58.964691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.41µs"
	I0719 14:42:58.965497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.65µs"
	I0719 14:42:58.965993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.853µs"
	I0719 14:42:59.171041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="185.461714ms"
	I0719 14:42:59.252632       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.957377ms"
	I0719 14:42:59.276267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.565212ms"
	I0719 14:42:59.276371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.353µs"
	I0719 14:42:59.376837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.676174ms"
	I0719 14:42:59.377108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.428µs"
	I0719 14:43:00.290301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.286µs"
	I0719 14:43:02.515412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.114416ms"
	I0719 14:43:02.515696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.239µs"
	I0719 14:43:02.786433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.235802ms"
	I0719 14:43:02.786611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.037µs"
	I0719 14:43:04.370083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.452877ms"
	I0719 14:43:04.370219       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.487µs"
	E0719 14:43:38.102577       1 certificate_controller.go:146] Sync csr-m2cbg failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-m2cbg": the object has been modified; please apply your changes to the latest version and try again
	I0719 14:43:38.369779       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-999305-m04\" does not exist"
	E0719 14:43:38.374446       1 certificate_controller.go:146] Sync csr-m2cbg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-m2cbg": the object has been modified; please apply your changes to the latest version and try again
	I0719 14:43:38.404367       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-999305-m04" podCIDRs=["10.244.3.0/24"]
	I0719 14:43:42.034735       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-999305-m04"
	I0719 14:43:59.610941       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-999305-m04"
	I0719 14:45:02.078302       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-999305-m04"
	I0719 14:45:02.324143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.422779ms"
	I0719 14:45:02.324265       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.854µs"
	
	
	==> kube-proxy [3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859] <==
	I0719 14:39:18.608465       1 server_linux.go:69] "Using iptables proxy"
	I0719 14:39:18.624270       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	I0719 14:39:18.670607       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 14:39:18.670721       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 14:39:18.670752       1 server_linux.go:165] "Using iptables Proxier"
	I0719 14:39:18.674486       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 14:39:18.675012       1 server.go:872] "Version info" version="v1.30.3"
	I0719 14:39:18.675061       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:39:18.679010       1 config.go:192] "Starting service config controller"
	I0719 14:39:18.679057       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 14:39:18.679097       1 config.go:101] "Starting endpoint slice config controller"
	I0719 14:39:18.679112       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 14:39:18.679404       1 config.go:319] "Starting node config controller"
	I0719 14:39:18.679431       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 14:39:18.779949       1 shared_informer.go:320] Caches are synced for node config
	I0719 14:39:18.779995       1 shared_informer.go:320] Caches are synced for service config
	I0719 14:39:18.780022       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a] <==
	W0719 14:39:03.143941       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 14:39:03.143983       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 14:39:03.199239       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 14:39:03.199283       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 14:39:06.302977       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 14:42:30.410490       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-twh47\": pod kube-proxy-twh47 is already assigned to node \"ha-999305-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-twh47" node="ha-999305-m03"
	E0719 14:42:30.410669       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dabe7d25-8bd8-42f8-9efd-0c800be277b3(kube-system/kube-proxy-twh47) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-twh47"
	E0719 14:42:30.410708       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-twh47\": pod kube-proxy-twh47 is already assigned to node \"ha-999305-m03\"" pod="kube-system/kube-proxy-twh47"
	I0719 14:42:30.410775       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-twh47" node="ha-999305-m03"
	E0719 14:43:38.472698       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jb992\": pod kube-proxy-jb992 is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jb992" node="ha-999305-m04"
	E0719 14:43:38.472918       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jb992\": pod kube-proxy-jb992 is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-jb992"
	E0719 14:43:38.481922       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k2hnq\": pod kindnet-k2hnq is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k2hnq" node="ha-999305-m04"
	E0719 14:43:38.482316       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 503c7a35-1ec2-49e3-b043-d756666fdefc(kube-system/kindnet-k2hnq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k2hnq"
	E0719 14:43:38.482355       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k2hnq\": pod kindnet-k2hnq is already assigned to node \"ha-999305-m04\"" pod="kube-system/kindnet-k2hnq"
	I0719 14:43:38.482384       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k2hnq" node="ha-999305-m04"
	E0719 14:43:38.610384       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fwx2c\": pod kube-proxy-fwx2c is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fwx2c" node="ha-999305-m04"
	E0719 14:43:38.610464       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8577838c-7ee2-44bf-bda8-e924f05aa0c0(kube-system/kube-proxy-fwx2c) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fwx2c"
	E0719 14:43:38.610495       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fwx2c\": pod kube-proxy-fwx2c is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-fwx2c"
	I0719 14:43:38.610518       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fwx2c" node="ha-999305-m04"
	E0719 14:43:40.343563       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rjfnq\": pod kube-proxy-rjfnq is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rjfnq" node="ha-999305-m04"
	E0719 14:43:40.343715       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rjfnq\": pod kube-proxy-rjfnq is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-rjfnq"
	E0719 14:43:40.369471       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sg9tk\": pod kube-proxy-sg9tk is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sg9tk" node="ha-999305-m04"
	E0719 14:43:40.369724       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 84ce1807-2a03-4bd0-ba20-a8230833533c(kube-system/kube-proxy-sg9tk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-sg9tk"
	E0719 14:43:40.369777       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sg9tk\": pod kube-proxy-sg9tk is already assigned to node \"ha-999305-m04\"" pod="kube-system/kube-proxy-sg9tk"
	I0719 14:43:40.369852       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sg9tk" node="ha-999305-m04"
	
	
	==> kubelet <==
	Jul 19 14:43:05 ha-999305 kubelet[1369]: E0719 14:43:05.413257    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:43:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:43:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:43:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:43:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:44:05 ha-999305 kubelet[1369]: E0719 14:44:05.412838    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:44:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:44:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:44:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:44:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:45:05 ha-999305 kubelet[1369]: E0719 14:45:05.413438    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:45:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:45:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:45:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:45:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:46:05 ha-999305 kubelet[1369]: E0719 14:46:05.412017    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:46:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:46:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:46:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:46:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:47:05 ha-999305 kubelet[1369]: E0719 14:47:05.411073    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:47:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:47:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:47:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:47:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-999305 -n ha-999305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-999305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.43s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-999305 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-999305 -v=7 --alsologtostderr
E0719 14:47:56.717120   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:49:28.744213   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-999305 -v=7 --alsologtostderr: exit status 82 (2m1.897561049s)

                                                
                                                
-- stdout --
	* Stopping node "ha-999305-m04"  ...
	* Stopping node "ha-999305-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:47:38.208459   28688 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:47:38.208552   28688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:38.208565   28688 out.go:304] Setting ErrFile to fd 2...
	I0719 14:47:38.208574   28688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:47:38.208742   28688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:47:38.208932   28688 out.go:298] Setting JSON to false
	I0719 14:47:38.209018   28688 mustload.go:65] Loading cluster: ha-999305
	I0719 14:47:38.209341   28688 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:47:38.209424   28688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:47:38.209590   28688 mustload.go:65] Loading cluster: ha-999305
	I0719 14:47:38.209725   28688 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:47:38.209752   28688 stop.go:39] StopHost: ha-999305-m04
	I0719 14:47:38.210130   28688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:38.210166   28688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:38.225029   28688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0719 14:47:38.225483   28688 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:38.226053   28688 main.go:141] libmachine: Using API Version  1
	I0719 14:47:38.226074   28688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:38.226461   28688 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:38.229310   28688 out.go:177] * Stopping node "ha-999305-m04"  ...
	I0719 14:47:38.230542   28688 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 14:47:38.230584   28688 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:47:38.230842   28688 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 14:47:38.230876   28688 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:47:38.233542   28688 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:38.234098   28688 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:43:22 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:47:38.234132   28688 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:47:38.234193   28688 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:47:38.234366   28688 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:47:38.234541   28688 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:47:38.234683   28688 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:47:38.327100   28688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 14:47:38.383198   28688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 14:47:38.437242   28688 main.go:141] libmachine: Stopping "ha-999305-m04"...
	I0719 14:47:38.437285   28688 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:38.438779   28688 main.go:141] libmachine: (ha-999305-m04) Calling .Stop
	I0719 14:47:38.442019   28688 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 0/120
	I0719 14:47:39.646107   28688 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:47:39.647420   28688 main.go:141] libmachine: Machine "ha-999305-m04" was stopped.
	I0719 14:47:39.647439   28688 stop.go:75] duration metric: took 1.416898212s to stop
	I0719 14:47:39.647462   28688 stop.go:39] StopHost: ha-999305-m03
	I0719 14:47:39.647740   28688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:47:39.647774   28688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:47:39.662451   28688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39749
	I0719 14:47:39.662901   28688 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:47:39.663367   28688 main.go:141] libmachine: Using API Version  1
	I0719 14:47:39.663392   28688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:47:39.663659   28688 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:47:39.665566   28688 out.go:177] * Stopping node "ha-999305-m03"  ...
	I0719 14:47:39.666695   28688 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 14:47:39.666715   28688 main.go:141] libmachine: (ha-999305-m03) Calling .DriverName
	I0719 14:47:39.666899   28688 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 14:47:39.666918   28688 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHHostname
	I0719 14:47:39.669732   28688 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:39.670315   28688 main.go:141] libmachine: (ha-999305-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:46:fe", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:41:56 +0000 UTC Type:0 Mac:52:54:00:c6:46:fe Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:ha-999305-m03 Clientid:01:52:54:00:c6:46:fe}
	I0719 14:47:39.670350   28688 main.go:141] libmachine: (ha-999305-m03) DBG | domain ha-999305-m03 has defined IP address 192.168.39.250 and MAC address 52:54:00:c6:46:fe in network mk-ha-999305
	I0719 14:47:39.670436   28688 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHPort
	I0719 14:47:39.670596   28688 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHKeyPath
	I0719 14:47:39.670738   28688 main.go:141] libmachine: (ha-999305-m03) Calling .GetSSHUsername
	I0719 14:47:39.670872   28688 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m03/id_rsa Username:docker}
	I0719 14:47:39.758766   28688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 14:47:39.813288   28688 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 14:47:39.870567   28688 main.go:141] libmachine: Stopping "ha-999305-m03"...
	I0719 14:47:39.870588   28688 main.go:141] libmachine: (ha-999305-m03) Calling .GetState
	I0719 14:47:39.872179   28688 main.go:141] libmachine: (ha-999305-m03) Calling .Stop
	I0719 14:47:39.875893   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 0/120
	I0719 14:47:40.877552   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 1/120
	I0719 14:47:41.879197   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 2/120
	I0719 14:47:42.880911   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 3/120
	I0719 14:47:43.882259   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 4/120
	I0719 14:47:44.884217   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 5/120
	I0719 14:47:45.885714   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 6/120
	I0719 14:47:46.887260   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 7/120
	I0719 14:47:47.888940   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 8/120
	I0719 14:47:48.890230   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 9/120
	I0719 14:47:49.892347   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 10/120
	I0719 14:47:50.893856   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 11/120
	I0719 14:47:51.895208   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 12/120
	I0719 14:47:52.896684   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 13/120
	I0719 14:47:53.897989   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 14/120
	I0719 14:47:54.899550   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 15/120
	I0719 14:47:55.901108   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 16/120
	I0719 14:47:56.902428   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 17/120
	I0719 14:47:57.903890   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 18/120
	I0719 14:47:58.905059   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 19/120
	I0719 14:47:59.907170   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 20/120
	I0719 14:48:00.908995   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 21/120
	I0719 14:48:01.910742   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 22/120
	I0719 14:48:02.912391   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 23/120
	I0719 14:48:03.914619   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 24/120
	I0719 14:48:04.915979   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 25/120
	I0719 14:48:05.917589   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 26/120
	I0719 14:48:06.919251   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 27/120
	I0719 14:48:07.920575   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 28/120
	I0719 14:48:08.922332   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 29/120
	I0719 14:48:09.924493   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 30/120
	I0719 14:48:10.926106   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 31/120
	I0719 14:48:11.927755   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 32/120
	I0719 14:48:12.929300   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 33/120
	I0719 14:48:13.930717   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 34/120
	I0719 14:48:14.932472   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 35/120
	I0719 14:48:15.933889   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 36/120
	I0719 14:48:16.935112   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 37/120
	I0719 14:48:17.936577   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 38/120
	I0719 14:48:18.937870   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 39/120
	I0719 14:48:19.939570   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 40/120
	I0719 14:48:20.940821   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 41/120
	I0719 14:48:21.942073   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 42/120
	I0719 14:48:22.943319   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 43/120
	I0719 14:48:23.944629   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 44/120
	I0719 14:48:24.946147   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 45/120
	I0719 14:48:25.947438   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 46/120
	I0719 14:48:26.948860   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 47/120
	I0719 14:48:27.950115   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 48/120
	I0719 14:48:28.951980   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 49/120
	I0719 14:48:29.953346   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 50/120
	I0719 14:48:30.954844   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 51/120
	I0719 14:48:31.956250   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 52/120
	I0719 14:48:32.957557   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 53/120
	I0719 14:48:33.959003   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 54/120
	I0719 14:48:34.960862   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 55/120
	I0719 14:48:35.962183   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 56/120
	I0719 14:48:36.963649   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 57/120
	I0719 14:48:37.965094   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 58/120
	I0719 14:48:38.966476   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 59/120
	I0719 14:48:39.968519   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 60/120
	I0719 14:48:40.970029   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 61/120
	I0719 14:48:41.971362   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 62/120
	I0719 14:48:42.972785   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 63/120
	I0719 14:48:43.973926   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 64/120
	I0719 14:48:44.975562   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 65/120
	I0719 14:48:45.976865   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 66/120
	I0719 14:48:46.978270   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 67/120
	I0719 14:48:47.979631   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 68/120
	I0719 14:48:48.981265   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 69/120
	I0719 14:48:49.983447   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 70/120
	I0719 14:48:50.984793   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 71/120
	I0719 14:48:51.986742   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 72/120
	I0719 14:48:52.988006   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 73/120
	I0719 14:48:53.989326   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 74/120
	I0719 14:48:54.991158   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 75/120
	I0719 14:48:55.992458   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 76/120
	I0719 14:48:56.993770   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 77/120
	I0719 14:48:57.995155   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 78/120
	I0719 14:48:58.996573   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 79/120
	I0719 14:48:59.998199   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 80/120
	I0719 14:49:00.999537   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 81/120
	I0719 14:49:02.001211   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 82/120
	I0719 14:49:03.002483   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 83/120
	I0719 14:49:04.004388   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 84/120
	I0719 14:49:05.005975   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 85/120
	I0719 14:49:06.007476   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 86/120
	I0719 14:49:07.008886   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 87/120
	I0719 14:49:08.010185   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 88/120
	I0719 14:49:09.011562   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 89/120
	I0719 14:49:10.013286   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 90/120
	I0719 14:49:11.014629   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 91/120
	I0719 14:49:12.015797   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 92/120
	I0719 14:49:13.016972   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 93/120
	I0719 14:49:14.018494   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 94/120
	I0719 14:49:15.020605   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 95/120
	I0719 14:49:16.021907   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 96/120
	I0719 14:49:17.023258   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 97/120
	I0719 14:49:18.024420   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 98/120
	I0719 14:49:19.025925   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 99/120
	I0719 14:49:20.028055   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 100/120
	I0719 14:49:21.029257   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 101/120
	I0719 14:49:22.030375   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 102/120
	I0719 14:49:23.031466   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 103/120
	I0719 14:49:24.032766   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 104/120
	I0719 14:49:25.034552   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 105/120
	I0719 14:49:26.035770   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 106/120
	I0719 14:49:27.037037   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 107/120
	I0719 14:49:28.038273   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 108/120
	I0719 14:49:29.039400   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 109/120
	I0719 14:49:30.040903   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 110/120
	I0719 14:49:31.042129   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 111/120
	I0719 14:49:32.043639   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 112/120
	I0719 14:49:33.045117   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 113/120
	I0719 14:49:34.046433   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 114/120
	I0719 14:49:35.048106   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 115/120
	I0719 14:49:36.049669   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 116/120
	I0719 14:49:37.051779   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 117/120
	I0719 14:49:38.053011   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 118/120
	I0719 14:49:39.054660   28688 main.go:141] libmachine: (ha-999305-m03) Waiting for machine to stop 119/120
	I0719 14:49:40.055530   28688 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 14:49:40.055595   28688 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 14:49:40.057569   28688 out.go:177] 
	W0719 14:49:40.059127   28688 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 14:49:40.059151   28688 out.go:239] * 
	* 
	W0719 14:49:40.061395   28688 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 14:49:40.063681   28688 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-999305 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-999305 --wait=true -v=7 --alsologtostderr
E0719 14:50:51.790767   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:52:29.032165   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-999305 --wait=true -v=7 --alsologtostderr: (4m14.367953732s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-999305
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-999305 -n ha-999305
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-999305 logs -n 25: (2.068174542s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m02:/home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m04 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp testdata/cp-test.txt                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305:/home/docker/cp-test_ha-999305-m04_ha-999305.txt                      |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305 sudo cat                                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305.txt                                |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m02:/home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03:/home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m03 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-999305 node stop m02 -v=7                                                    | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-999305 node start m02 -v=7                                                   | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:46 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-999305 -v=7                                                          | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-999305 -v=7                                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-999305 --wait=true -v=7                                                   | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:49 UTC | 19 Jul 24 14:53 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-999305                                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:53 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:49:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:49:40.105178   29144 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:49:40.105299   29144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:49:40.105308   29144 out.go:304] Setting ErrFile to fd 2...
	I0719 14:49:40.105313   29144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:49:40.105496   29144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:49:40.106030   29144 out.go:298] Setting JSON to false
	I0719 14:49:40.106945   29144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1926,"bootTime":1721398654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:49:40.106999   29144 start.go:139] virtualization: kvm guest
	I0719 14:49:40.109542   29144 out.go:177] * [ha-999305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:49:40.111084   29144 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:49:40.111117   29144 notify.go:220] Checking for updates...
	I0719 14:49:40.114041   29144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:49:40.115378   29144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:49:40.116728   29144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:49:40.118107   29144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:49:40.119301   29144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:49:40.120850   29144 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:49:40.120962   29144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:49:40.121365   29144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:49:40.121422   29144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:49:40.136080   29144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0719 14:49:40.136542   29144 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:49:40.137035   29144 main.go:141] libmachine: Using API Version  1
	I0719 14:49:40.137055   29144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:49:40.137437   29144 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:49:40.137618   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:49:40.173709   29144 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 14:49:40.175045   29144 start.go:297] selected driver: kvm2
	I0719 14:49:40.175071   29144 start.go:901] validating driver "kvm2" against &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:49:40.175218   29144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:49:40.175698   29144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:49:40.175785   29144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:49:40.191425   29144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:49:40.192147   29144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:49:40.192178   29144 cni.go:84] Creating CNI manager for ""
	I0719 14:49:40.192184   29144 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 14:49:40.192244   29144 start.go:340] cluster config:
	{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:49:40.192393   29144 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:49:40.194356   29144 out.go:177] * Starting "ha-999305" primary control-plane node in "ha-999305" cluster
	I0719 14:49:40.195722   29144 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:49:40.195763   29144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 14:49:40.195773   29144 cache.go:56] Caching tarball of preloaded images
	I0719 14:49:40.195843   29144 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:49:40.195854   29144 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:49:40.195975   29144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:49:40.196182   29144 start.go:360] acquireMachinesLock for ha-999305: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:49:40.196227   29144 start.go:364] duration metric: took 26.699µs to acquireMachinesLock for "ha-999305"
	I0719 14:49:40.196242   29144 start.go:96] Skipping create...Using existing machine configuration
	I0719 14:49:40.196246   29144 fix.go:54] fixHost starting: 
	I0719 14:49:40.196493   29144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:49:40.196526   29144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:49:40.211853   29144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0719 14:49:40.212348   29144 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:49:40.212800   29144 main.go:141] libmachine: Using API Version  1
	I0719 14:49:40.212826   29144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:49:40.213108   29144 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:49:40.213296   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:49:40.213473   29144 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:49:40.215328   29144 fix.go:112] recreateIfNeeded on ha-999305: state=Running err=<nil>
	W0719 14:49:40.215346   29144 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 14:49:40.217258   29144 out.go:177] * Updating the running kvm2 "ha-999305" VM ...
	I0719 14:49:40.218473   29144 machine.go:94] provisionDockerMachine start ...
	I0719 14:49:40.218492   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:49:40.218701   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.221061   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.221505   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.221531   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.221711   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.221877   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.222019   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.222179   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.222394   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:40.222607   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:40.222619   29144 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 14:49:40.327399   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305
	
	I0719 14:49:40.327427   29144 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:49:40.327660   29144 buildroot.go:166] provisioning hostname "ha-999305"
	I0719 14:49:40.327688   29144 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:49:40.327887   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.330384   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.330865   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.330885   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.331058   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.331238   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.331385   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.331504   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.331627   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:40.331865   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:40.331884   29144 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305 && echo "ha-999305" | sudo tee /etc/hostname
	I0719 14:49:40.451751   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305
	
	I0719 14:49:40.451776   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.454251   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.454709   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.454737   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.454931   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.455132   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.455335   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.455505   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.455644   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:40.455800   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:40.455814   29144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:49:40.559254   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:49:40.559282   29144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:49:40.559298   29144 buildroot.go:174] setting up certificates
	I0719 14:49:40.559305   29144 provision.go:84] configureAuth start
	I0719 14:49:40.559313   29144 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:49:40.559560   29144 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:49:40.562143   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.562606   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.562631   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.562744   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.564772   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.565162   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.565183   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.565325   29144 provision.go:143] copyHostCerts
	I0719 14:49:40.565363   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:49:40.565410   29144 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:49:40.565422   29144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:49:40.565489   29144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:49:40.565574   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:49:40.565599   29144 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:49:40.565610   29144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:49:40.565637   29144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:49:40.565692   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:49:40.565715   29144 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:49:40.565721   29144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:49:40.565742   29144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:49:40.565798   29144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305 san=[127.0.0.1 192.168.39.240 ha-999305 localhost minikube]
	I0719 14:49:40.845539   29144 provision.go:177] copyRemoteCerts
	I0719 14:49:40.845596   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:49:40.845630   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.848385   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.848705   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.848738   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.848907   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.849129   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.849292   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.849439   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:49:40.933577   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:49:40.933642   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:49:40.963389   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:49:40.963486   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0719 14:49:40.996244   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:49:40.996321   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 14:49:41.022200   29144 provision.go:87] duration metric: took 462.869786ms to configureAuth
	I0719 14:49:41.022226   29144 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:49:41.022460   29144 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:49:41.022539   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:41.025092   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:41.025445   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:41.025465   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:41.025637   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:41.025856   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:41.026012   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:41.026156   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:41.026320   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:41.026489   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:41.026503   29144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:51:11.875061   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:51:11.875094   29144 machine.go:97] duration metric: took 1m31.656610142s to provisionDockerMachine
	I0719 14:51:11.875107   29144 start.go:293] postStartSetup for "ha-999305" (driver="kvm2")
	I0719 14:51:11.875118   29144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:51:11.875134   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:11.875513   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:51:11.875545   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:11.878578   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:11.878994   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:11.879019   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:11.879148   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:11.879332   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:11.879486   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:11.879615   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:51:11.966200   29144 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:51:11.970465   29144 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:51:11.970491   29144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:51:11.970549   29144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:51:11.970639   29144 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:51:11.970649   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:51:11.970747   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:51:11.980960   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:51:12.005929   29144 start.go:296] duration metric: took 130.807251ms for postStartSetup
	I0719 14:51:12.005983   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.006313   29144 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0719 14:51:12.006340   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.009115   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.009479   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.009501   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.009629   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.009819   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.009985   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.010282   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	W0719 14:51:12.093343   29144 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0719 14:51:12.093372   29144 fix.go:56] duration metric: took 1m31.897125382s for fixHost
	I0719 14:51:12.093393   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.096574   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.097014   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.097040   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.097188   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.097373   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.097542   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.097697   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.097867   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:51:12.098071   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:51:12.098086   29144 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:51:12.203077   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721400672.166782476
	
	I0719 14:51:12.203098   29144 fix.go:216] guest clock: 1721400672.166782476
	I0719 14:51:12.203104   29144 fix.go:229] Guest: 2024-07-19 14:51:12.166782476 +0000 UTC Remote: 2024-07-19 14:51:12.093379426 +0000 UTC m=+92.020235625 (delta=73.40305ms)
	I0719 14:51:12.203139   29144 fix.go:200] guest clock delta is within tolerance: 73.40305ms
	I0719 14:51:12.203150   29144 start.go:83] releasing machines lock for "ha-999305", held for 1m32.006913425s
	I0719 14:51:12.203176   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.203438   29144 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:51:12.205781   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.206156   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.206178   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.206357   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.206894   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.207057   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.207127   29144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:51:12.207176   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.207244   29144 ssh_runner.go:195] Run: cat /version.json
	I0719 14:51:12.207266   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.210002   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210028   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210394   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.210425   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210450   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.210467   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210546   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.210730   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.210758   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.210919   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.210930   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.211106   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.211092   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:51:12.211259   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:51:12.287688   29144 ssh_runner.go:195] Run: systemctl --version
	I0719 14:51:12.313613   29144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:51:12.473291   29144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:51:12.484625   29144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:51:12.484697   29144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:51:12.495546   29144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 14:51:12.495572   29144 start.go:495] detecting cgroup driver to use...
	I0719 14:51:12.495642   29144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:51:12.514801   29144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:51:12.529894   29144 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:51:12.529951   29144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:51:12.544809   29144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:51:12.559050   29144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:51:12.709134   29144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:51:12.868990   29144 docker.go:233] disabling docker service ...
	I0719 14:51:12.869072   29144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:51:12.887504   29144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:51:12.902635   29144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:51:13.050912   29144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:51:13.216101   29144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:51:13.231503   29144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:51:13.250302   29144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:51:13.250359   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.261245   29144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:51:13.261291   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.272145   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.283029   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.294785   29144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:51:13.306215   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.316900   29144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.328175   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.338756   29144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:51:13.348275   29144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:51:13.358022   29144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:51:13.500550   29144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:51:16.528461   29144 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.027870368s)
	I0719 14:51:16.528488   29144 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:51:16.528534   29144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:51:16.533546   29144 start.go:563] Will wait 60s for crictl version
	I0719 14:51:16.533603   29144 ssh_runner.go:195] Run: which crictl
	I0719 14:51:16.537406   29144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:51:16.573517   29144 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:51:16.573591   29144 ssh_runner.go:195] Run: crio --version
	I0719 14:51:16.603842   29144 ssh_runner.go:195] Run: crio --version
	I0719 14:51:16.636051   29144 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:51:16.637775   29144 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:51:16.640426   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:16.640849   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:16.640872   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:16.641120   29144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:51:16.645986   29144 kubeadm.go:883] updating cluster {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 14:51:16.646105   29144 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:51:16.646148   29144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:51:16.688344   29144 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:51:16.688367   29144 crio.go:433] Images already preloaded, skipping extraction
	I0719 14:51:16.688409   29144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:51:16.728168   29144 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:51:16.728187   29144 cache_images.go:84] Images are preloaded, skipping loading
	I0719 14:51:16.728198   29144 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.30.3 crio true true} ...
	I0719 14:51:16.728318   29144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:51:16.728392   29144 ssh_runner.go:195] Run: crio config
	I0719 14:51:16.778642   29144 cni.go:84] Creating CNI manager for ""
	I0719 14:51:16.778663   29144 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 14:51:16.778674   29144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 14:51:16.778707   29144 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-999305 NodeName:ha-999305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 14:51:16.778877   29144 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-999305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 14:51:16.778901   29144 kube-vip.go:115] generating kube-vip config ...
	I0719 14:51:16.778948   29144 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:51:16.790748   29144 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 14:51:16.790879   29144 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 14:51:16.790931   29144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:51:16.800505   29144 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 14:51:16.800576   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 14:51:16.810375   29144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 14:51:16.827616   29144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:51:16.844601   29144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 14:51:16.861415   29144 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 14:51:16.879337   29144 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:51:16.883332   29144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:51:17.029861   29144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:51:17.044852   29144 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.240
	I0719 14:51:17.044876   29144 certs.go:194] generating shared ca certs ...
	I0719 14:51:17.044897   29144 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:51:17.045072   29144 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:51:17.045125   29144 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:51:17.045136   29144 certs.go:256] generating profile certs ...
	I0719 14:51:17.045225   29144 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:51:17.045258   29144 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f
	I0719 14:51:17.045276   29144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.163 192.168.39.250 192.168.39.254]
	I0719 14:51:17.130449   29144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f ...
	I0719 14:51:17.130478   29144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f: {Name:mk555a387d73727c036dcc44a211fbe6bf73fde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:51:17.130653   29144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f ...
	I0719 14:51:17.130664   29144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f: {Name:mk56197489fc1e516512c7ab5eb629df8c3584da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:51:17.130740   29144 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:51:17.130880   29144 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
	I0719 14:51:17.130995   29144 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
	I0719 14:51:17.131009   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:51:17.131023   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:51:17.131036   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:51:17.131049   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:51:17.131064   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:51:17.131077   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:51:17.131093   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:51:17.131104   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 14:51:17.131147   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:51:17.131176   29144 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:51:17.131186   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:51:17.131206   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:51:17.131226   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:51:17.131248   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:51:17.131283   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:51:17.131307   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.131320   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.131332   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.131889   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:51:17.158384   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:51:17.187884   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:51:17.212975   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:51:17.236097   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 14:51:17.259429   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 14:51:17.283132   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:51:17.307510   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:51:17.331508   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:51:17.355030   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:51:17.378712   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:51:17.403275   29144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 14:51:17.420223   29144 ssh_runner.go:195] Run: openssl version
	I0719 14:51:17.426598   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:51:17.437802   29144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.442458   29144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.442524   29144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.448406   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:51:17.459192   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:51:17.470709   29144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.475219   29144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.475261   29144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.480867   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 14:51:17.490595   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:51:17.501614   29144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.506068   29144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.506110   29144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.511698   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:51:17.521977   29144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:51:17.526469   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 14:51:17.532280   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 14:51:17.537838   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 14:51:17.543443   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 14:51:17.548942   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 14:51:17.554944   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 14:51:17.560732   29144 kubeadm.go:392] StartCluster: {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:51:17.560844   29144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 14:51:17.560879   29144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 14:51:17.596964   29144 cri.go:89] found id: "ace5d8da1e7011556e4585b160750ff561f4af73291bb21315e2b342283bdd93"
	I0719 14:51:17.596987   29144 cri.go:89] found id: "b61c51a97f4faba8350ec052b78fd4b55bf293186fbf7143cd98e00900cec56d"
	I0719 14:51:17.596993   29144 cri.go:89] found id: "5f8e15e50f632211d067214d031357c0f3c9c63aa5eca7feda3a937c498ab8f2"
	I0719 14:51:17.596998   29144 cri.go:89] found id: "40a0d71907cfc4362041b0e195a73d22bae97dc344275aaa2da78693faa9d053"
	I0719 14:51:17.597002   29144 cri.go:89] found id: "8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59"
	I0719 14:51:17.597006   29144 cri.go:89] found id: "60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75"
	I0719 14:51:17.597010   29144 cri.go:89] found id: "f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d"
	I0719 14:51:17.597014   29144 cri.go:89] found id: "3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859"
	I0719 14:51:17.597018   29144 cri.go:89] found id: "f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e"
	I0719 14:51:17.597025   29144 cri.go:89] found id: "4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50"
	I0719 14:51:17.597029   29144 cri.go:89] found id: "85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf"
	I0719 14:51:17.597049   29144 cri.go:89] found id: "eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a"
	I0719 14:51:17.597056   29144 cri.go:89] found id: "21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723"
	I0719 14:51:17.597060   29144 cri.go:89] found id: ""
	I0719 14:51:17.597107   29144 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.296305600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3954db3a-464f-4f32-b9a9-e83421b2a63a name=/runtime.v1.RuntimeService/Version
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.297454559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2412eec5-f6af-4ad4-bfb6-bd132bb47f81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.298369983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400835298345478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2412eec5-f6af-4ad4-bfb6-bd132bb47f81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.299928673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e12c8814-d328-4ef2-8daf-a62e3d4d7b2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.300198075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e12c8814-d328-4ef2-8daf-a62e3d4d7b2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.300658276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721400682642224837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1
e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721400682453647322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f7
50b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721400682318099782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721400183757593750,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annot
ations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970872684834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970877758331,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721399958717372581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721399958388711079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1721399938927055827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedA
t:1721399938875055751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e12c8814-d328-4ef2-8daf-a62e3d4d7b2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.344463380Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be1206bc-409a-46a1-9427-9d5ebb7d119a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.344859196Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-2rfw6,Uid:25cd3990-0ad4-44e2-895c-4e8c81e621af,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400715735007464,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T14:42:58.937052586Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-999305,Uid:4a7dfba96665bd8b5110250981ccbb6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1721400696382662773,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{kubernetes.io/config.hash: 4a7dfba96665bd8b5110250981ccbb6a,kubernetes.io/config.seen: 2024-07-19T14:51:16.843683464Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9sxgr,Uid:f394b2d0-345c-4f2c-9c30-4c7c8c13361b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400682116414158,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-19T14:39:30.297390383Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-999305,Uid:225afe64001307a6e59a1e30b782f3b5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400682062117232,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 225afe64001307a6e59a1e30b782f3b5,kubernetes.io/config.seen: 2024-07-19T14:39:05.281844365Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gtwxd,Uid:8ccad831-1940-4a7c-bea7-a73b07f9d3a2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READ
Y,CreatedAt:1721400682059990239,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T14:39:30.289499264Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&PodSandboxMetadata{Name:etcd-ha-999305,Uid:a7c6c44e50a74c1ab1df915e3708a4b0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400682058734329,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.240:2379,k
ubernetes.io/config.hash: a7c6c44e50a74c1ab1df915e3708a4b0,kubernetes.io/config.seen: 2024-07-19T14:39:05.281832880Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&PodSandboxMetadata{Name:kindnet-tpffr,Uid:e6847e94-cf07-4fa7-9729-dca36c54672e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400682056858203,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T14:39:17.948577282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&PodSandboxMetadata{Name:kube-proxy-s2wb7,Uid:3f96f5ff-96c6-460c-
b8da-23d5dda42745,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400682026495006,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T14:39:17.943808191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5dc00743-8980-495b-9a44-c3d3d42829f6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400682023131614,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T14:39:30.304569101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&PodSandboxMetadata{Name:kube-
apiserver-ha-999305,Uid:7f97b8931ee147a8b6b7be70edef5c8c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400682020154064,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.240:8443,kubernetes.io/config.hash: 7f97b8931ee147a8b6b7be70edef5c8c,kubernetes.io/config.seen: 2024-07-19T14:39:05.281842486Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-999305,Uid:7dc610418d0256f750b6fcb062df4e70,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721400681997702229,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7dc610418d0256f750b6fcb062df4e70,kubernetes.io/config.seen: 2024-07-19T14:39:05.281843509Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=be1206bc-409a-46a1-9427-9d5ebb7d119a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.346317639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=077ec38c-a6b3-4adb-b335-ab8c9ca2727e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.346420304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=077ec38c-a6b3-4adb-b335-ab8c9ca2727e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.346753899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"T
CP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64
001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=077ec38c-a6b3-4adb-b335-ab8c9ca2727e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.355767124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef70de31-819b-460a-a048-92abec7f57d2 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.355844782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef70de31-819b-460a-a048-92abec7f57d2 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.360061227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f9f8f74-6c9d-4795-ab0a-f66671221020 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.360656734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400835360629659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f9f8f74-6c9d-4795-ab0a-f66671221020 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.366980296Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47565348-72ef-4c17-abe9-05553619753c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.367117035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47565348-72ef-4c17-abe9-05553619753c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.367700771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721400682642224837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1
e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721400682453647322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f7
50b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721400682318099782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721400183757593750,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annot
ations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970872684834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970877758331,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721399958717372581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721399958388711079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1721399938927055827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedA
t:1721399938875055751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47565348-72ef-4c17-abe9-05553619753c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.422830196Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1952a42-f029-4c47-9746-4cfdf0e849cc name=/runtime.v1.RuntimeService/Version
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.423094385Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1952a42-f029-4c47-9746-4cfdf0e849cc name=/runtime.v1.RuntimeService/Version
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.425161578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b8c86bd-2c43-4998-b906-38c84815dd32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.425976413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400835425947679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b8c86bd-2c43-4998-b906-38c84815dd32 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.427232645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=234d8671-17b1-4893-ac24-6090902b31ec name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.427328303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=234d8671-17b1-4893-ac24-6090902b31ec name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:53:55 ha-999305 crio[3814]: time="2024-07-19 14:53:55.427852558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721400682642224837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1
e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721400682453647322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f7
50b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721400682318099782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721400183757593750,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annot
ations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970872684834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970877758331,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721399958717372581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721399958388711079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1721399938927055827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedA
t:1721399938875055751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=234d8671-17b1-4893-ac24-6090902b31ec name=/runtime.v1.RuntimeService/ListContainers
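	
The crio entries above are debug-level request/response traces (Version, ImageFsInfo, ListContainers) emitted by the CRI-O daemon on the node. As a rough sketch only, the same stream can be read back from the node's systemd journal, assuming the ha-999305 profile and a systemd-managed crio unit as suggested by the crio[3814] prefix:

  $ out/minikube-linux-amd64 -p ha-999305 ssh "sudo journalctl -u crio --no-pager | tail -n 200"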
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5d2dbf7d61853       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      56 seconds ago       Running             storage-provisioner       5                   903035f21620c       storage-provisioner
	29a4d735b7ed0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   9f9bfe7833947       kube-apiserver-ha-999305
	dfafedeba739f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   598787c3a3b5c       kube-controller-manager-ha-999305
	7feb50f1802cc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   d11211ae99c55       busybox-fc5497c4f-2rfw6
	7c3a36939e35f       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   d2ea66b603ce5       kube-vip-ha-999305
	61f7ef08f69aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   07b84953816d3       coredns-7db6d8ff4d-9sxgr
	2c7086de4c9cf       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      2 minutes ago        Running             kindnet-cni               1                   d7f9fb434d518       kindnet-tpffr
	4abf3705cf91b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   2f475acbf17ee       etcd-ha-999305
	a7e947cd85904       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   aa808b8f714e4       kube-proxy-s2wb7
	3b4f082bbec57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   903035f21620c       storage-provisioner
	95527d65a8d52       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   6106ded0c51a3       coredns-7db6d8ff4d-gtwxd
	6af80a77cbda0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   768143bcd1894       kube-scheduler-ha-999305
	fa3370ea85a8d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   598787c3a3b5c       kube-controller-manager-ha-999305
	1d0019ea14b1f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   9f9bfe7833947       kube-apiserver-ha-999305
	d401082f94c28       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   f0b7b801c04fe       busybox-fc5497c4f-2rfw6
	8a1cd64a0c897       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   35affd85abc52       coredns-7db6d8ff4d-gtwxd
	60ddffbf7c51f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   1eb500abeaf59       coredns-7db6d8ff4d-9sxgr
	f411cdcc4b000       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      14 minutes ago       Exited              kindnet-cni               0                   b21ce83a41d26       kindnet-tpffr
	3df47e2e7e71d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   0bc58fc40b11b       kube-proxy-s2wb7
	4106d6aa51360       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   4fe960d43fbe4       etcd-ha-999305
	eea532e07ff56       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   01e1ea6c3d6e9       kube-scheduler-ha-999305
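	
The "container status" table is the node-side CRI view of the same containers listed in the ListContainers responses above. A roughly equivalent listing can be produced by hand with crictl over minikube ssh (illustrative sketch; profile name ha-999305 taken from the log):

  $ out/minikube-linux-amd64 -p ha-999305 ssh "sudo crictl ps -a"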
	
	
	==> coredns [60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75] <==
	[INFO] 10.244.0.4:37231 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199786s
	[INFO] 10.244.0.4:46408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015144s
	[INFO] 10.244.2.2:44298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253661s
	[INFO] 10.244.2.2:46320 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124288s
	[INFO] 10.244.2.2:55428 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001507596s
	[INFO] 10.244.2.2:49678 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072967s
	[INFO] 10.244.1.2:50895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001783712s
	[INFO] 10.244.1.2:40165 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093772s
	[INFO] 10.244.1.2:53172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001252641s
	[INFO] 10.244.1.2:34815 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105356s
	[INFO] 10.244.1.2:37850 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213269s
	[INFO] 10.244.2.2:37470 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132796s
	[INFO] 10.244.1.2:53739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116332s
	[INFO] 10.244.1.2:49785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150432s
	[INFO] 10.244.1.2:39191 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095042s
	[INFO] 10.244.0.4:54115 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158247s
	[INFO] 10.244.2.2:54824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010194s
	[INFO] 10.244.2.2:53937 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137939s
	[INFO] 10.244.2.2:32859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135977s
	[INFO] 10.244.1.2:38346 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011678s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e] <==
	Trace[1998108003]: [10.387021259s] [10.387021259s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:33670->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33680->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1105691522]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 14:51:34.688) (total time: 10133ms):
	Trace[1105691522]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33680->10.96.0.1:443: read: connection reset by peer 10133ms (14:51:44.822)
	Trace[1105691522]: [10.133947768s] [10.133947768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33680->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59] <==
	[INFO] 10.244.0.4:53550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131429s
	[INFO] 10.244.2.2:43045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176233s
	[INFO] 10.244.2.2:58868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001941494s
	[INFO] 10.244.2.2:46158 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115413s
	[INFO] 10.244.2.2:48082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182529s
	[INFO] 10.244.1.2:43898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136537s
	[INFO] 10.244.1.2:41884 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111392s
	[INFO] 10.244.1.2:37393 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070881s
	[INFO] 10.244.0.4:38875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088591s
	[INFO] 10.244.0.4:39118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123769s
	[INFO] 10.244.0.4:52630 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045788s
	[INFO] 10.244.0.4:40500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041439s
	[INFO] 10.244.2.2:60125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195649s
	[INFO] 10.244.2.2:60453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126438s
	[INFO] 10.244.2.2:49851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00022498s
	[INFO] 10.244.1.2:57692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010212s
	[INFO] 10.244.0.4:59894 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230322s
	[INFO] 10.244.0.4:42506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177637s
	[INFO] 10.244.0.4:53162 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099069s
	[INFO] 10.244.2.2:44371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126437s
	[INFO] 10.244.1.2:47590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107441s
	[INFO] 10.244.1.2:44734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130206s
	[INFO] 10.244.1.2:33311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075949s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3] <==
	Trace[1082151208]: [10.00115321s] [10.00115321s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:32878->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:32878->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:32864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1762988053]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 14:51:34.511) (total time: 10311ms):
	Trace[1762988053]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:32864->10.96.0.1:443: read: connection reset by peer 10311ms (14:51:44.822)
	Trace[1762988053]: [10.311303354s] [10.311303354s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:32864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
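	
The list/watch failures in the restarted coredns containers (attempt 1) above all target https://10.96.0.1:443, the in-cluster kubernetes Service, so they reflect the API server being unreachable during its restart. As a sketch, API server health can be probed directly from the host (assuming the kubectl context name matches the ha-999305 profile):

  $ kubectl --context ha-999305 get --raw='/readyz?verbose'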
	
	
	==> describe nodes <==
	Name:               ha-999305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T14_39_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:53:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-999305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1230c1bed065421db8c3e4d5f899877a
	  System UUID:                1230c1be-d065-421d-b8c3-e4d5f899877a
	  Boot ID:                    7e7082ac-a784-4d5a-9539-9692157a7b3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2rfw6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-9sxgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-gtwxd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-999305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-tpffr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-999305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-999305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-s2wb7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-999305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-999305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 108s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-999305 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-999305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-999305 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-999305 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Warning  ContainerGCFailed        2m50s (x2 over 3m50s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           103s                   node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   RegisteredNode           98s                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	
	
	Name:               ha-999305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_41_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:53:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-999305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27a97bc8637c4fba94a7bb397a84b598
	  System UUID:                27a97bc8-637c-4fba-94a7-bb397a84b598
	  Boot ID:                    976ac5cc-cf36-40bb-b39c-6e0ab51c2d42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pcfwd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-999305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-hsb9f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-999305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-999305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-766sx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-999305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-999305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 90s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-999305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-999305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-999305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeNotReady             8m53s                  node-controller  Node ha-999305-m02 status is now: NodeNotReady
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node ha-999305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x7 over 2m15s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s                   node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           98s                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           33s                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	
	
	Name:               ha-999305-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_42_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:42:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:53:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:53:24 +0000   Fri, 19 Jul 2024 14:52:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:53:24 +0000   Fri, 19 Jul 2024 14:52:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:53:24 +0000   Fri, 19 Jul 2024 14:52:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:53:24 +0000   Fri, 19 Jul 2024 14:52:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    ha-999305-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c04be1041e3417f9ec04f3f6a94b977
	  System UUID:                6c04be10-41e3-417f-9ec0-4f3f6a94b977
	  Boot ID:                    a8b7a3da-3aa0-4424-b649-e891da91b6ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6kcdj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-999305-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-b7lvb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-999305-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-999305-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-twh47                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-999305-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-999305-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 45s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-999305-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-999305-m03 status is now: NodeNotReady
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s (x3 over 63s)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x3 over 63s)  kubelet          Node ha-999305-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x3 over 63s)  kubelet          Node ha-999305-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 63s (x2 over 63s)  kubelet          Node ha-999305-m03 has been rebooted, boot id: a8b7a3da-3aa0-4424-b649-e891da91b6ed
	  Normal   NodeReady                63s (x2 over 63s)  kubelet          Node ha-999305-m03 status is now: NodeReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-999305-m03 event: Registered Node ha-999305-m03 in Controller
	
	
	Name:               ha-999305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_43_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:43:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:53:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:53:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:53:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:53:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:53:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-999305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d4c450135c44d386a1cb39310dd813
	  System UUID:                74d4c450-135c-44d3-86a1-cb39310dd813
	  Boot ID:                    e27ebc77-8f7e-410d-8686-f4482a9e2888
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-j9gzv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-qqtph    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-999305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   NodeReady                9m57s              kubelet          Node ha-999305-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s               node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-999305-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-999305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-999305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-999305-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-999305-m04 has been rebooted, boot id: e27ebc77-8f7e-410d-8686-f4482a9e2888
	  Normal   NodeReady                9s                 kubelet          Node ha-999305-m04 status is now: NodeReady
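
	The four node summaries above are standard kubectl output; a minimal way to regenerate this snapshot (assuming the kubectl context carries the profile name, as minikube normally sets it) is:
	  kubectl --context ha-999305 describe nodes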
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.149646] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.056448] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062757] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.176758] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.118673] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.280022] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.245148] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.893793] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.060163] kauditd_printk_skb: 158 callbacks suppressed
	[Jul19 14:39] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.183971] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +6.719863] kauditd_printk_skb: 23 callbacks suppressed
	[ +19.024750] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 14:41] kauditd_printk_skb: 26 callbacks suppressed
	[Jul19 14:48] kauditd_printk_skb: 1 callbacks suppressed
	[Jul19 14:51] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +0.154974] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.190442] systemd-fstab-generator[3759]: Ignoring "noauto" option for root device
	[  +0.162552] systemd-fstab-generator[3772]: Ignoring "noauto" option for root device
	[  +0.287386] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +3.521466] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	[  +5.118809] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.341591] kauditd_printk_skb: 85 callbacks suppressed
	[Jul19 14:52] kauditd_printk_skb: 3 callbacks suppressed
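
	The dmesg excerpt above comes from the control-plane VM's kernel ring buffer; one way to collect it by hand (a sketch, assuming default minikube ssh access to this profile) is:
	  minikube -p ha-999305 ssh -- sudo dmesg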
	
	
	==> etcd [4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50] <==
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T14:49:41.190071Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:49:40.5802Z","time spent":"609.864297ms","remote":"127.0.0.1:39214","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T14:49:41.190085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:49:33.328829Z","time spent":"7.861252178s","remote":"127.0.0.1:39056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T14:49:41.189865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:49:40.571681Z","time spent":"618.179005ms","remote":"127.0.0.1:39450","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 "}
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-19T14:49:41.232513Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1cdefa49b8abbef9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-19T14:49:41.232688Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.232732Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.232778Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.232939Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233098Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233132Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233141Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233156Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233174Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233266Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233338Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233418Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233462Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.236958Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2024-07-19T14:49:41.237121Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2024-07-19T14:49:41.237165Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-999305","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	
	
	==> etcd [4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43] <==
	{"level":"warn","ts":"2024-07-19T14:52:53.848587Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"cb8c47d19ac5c821","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:52:53.865802Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"cb8c47d19ac5c821","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:52:54.133799Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"cb8c47d19ac5c821","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:52:54.133862Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"cb8c47d19ac5c821","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:52:58.136222Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"cb8c47d19ac5c821","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:52:58.136428Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"cb8c47d19ac5c821","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:52:58.849352Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"cb8c47d19ac5c821","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:52:58.866757Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"cb8c47d19ac5c821","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:53:01.65939Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"cb8c47d19ac5c821","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"6.383245ms"}
	{"level":"warn","ts":"2024-07-19T14:53:01.659476Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"af1cb735ec0c662e","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"6.480186ms"}
	{"level":"info","ts":"2024-07-19T14:53:01.65984Z","caller":"traceutil/trace.go:171","msg":"trace[138329489] transaction","detail":"{read_only:false; response_revision:2551; number_of_response:1; }","duration":"201.672791ms","start":"2024-07-19T14:53:01.45813Z","end":"2024-07-19T14:53:01.659803Z","steps":["trace[138329489] 'process raft request'  (duration: 201.551105ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T14:53:01.660672Z","caller":"traceutil/trace.go:171","msg":"trace[1698733029] linearizableReadLoop","detail":"{readStateIndex:2985; appliedIndex:2986; }","duration":"172.24473ms","start":"2024-07-19T14:53:01.48841Z","end":"2024-07-19T14:53:01.660655Z","steps":["trace[1698733029] 'read index received'  (duration: 172.241955ms)","trace[1698733029] 'applied index is now lower than readState.Index'  (duration: 2.247µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T14:53:01.661385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.943873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-999305-m03\" ","response":"range_response_count:1 size:5678"}
	{"level":"info","ts":"2024-07-19T14:53:01.66155Z","caller":"traceutil/trace.go:171","msg":"trace[1118881615] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-999305-m03; range_end:; response_count:1; response_revision:2551; }","duration":"173.215985ms","start":"2024-07-19T14:53:01.488318Z","end":"2024-07-19T14:53:01.661534Z","steps":["trace[1118881615] 'agreement among raft nodes before linearized reading'  (duration: 172.932282ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T14:53:02.138253Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.250:2380/version","remote-member-id":"cb8c47d19ac5c821","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:53:02.138333Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"cb8c47d19ac5c821","error":"Get \"https://192.168.39.250:2380/version\": dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:53:03.849998Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"cb8c47d19ac5c821","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T14:53:03.867173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"cb8c47d19ac5c821","rtt":"0s","error":"dial tcp 192.168.39.250:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-19T14:53:05.856188Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:53:05.856326Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:53:05.856427Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:53:05.873133Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"cb8c47d19ac5c821","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-19T14:53:05.873307Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:53:05.879367Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"cb8c47d19ac5c821","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T14:53:05.879502Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	
	
	==> kernel <==
	 14:53:56 up 15 min,  0 users,  load average: 0.62, 0.63, 0.39
	Linux ha-999305 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b] <==
	I0719 14:53:23.902371       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:53:33.906864       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:53:33.906993       1 main.go:303] handling current node
	I0719 14:53:33.907039       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:53:33.907049       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:53:33.907233       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:53:33.907264       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:53:33.907343       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:53:33.907375       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:53:43.909343       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:53:43.909465       1 main.go:303] handling current node
	I0719 14:53:43.909497       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:53:43.909516       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:53:43.909657       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:53:43.909679       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:53:43.909752       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:53:43.909779       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:53:53.905972       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:53:53.906098       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:53:53.906316       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:53:53.906373       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:53:53.906492       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:53:53.906527       1 main.go:303] handling current node
	I0719 14:53:53.906555       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:53:53.906576       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d] <==
	I0719 14:49:09.901111       1 main.go:303] handling current node
	I0719 14:49:19.892004       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:49:19.892072       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:49:19.892960       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:49:19.893020       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:49:19.893359       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:49:19.893399       1 main.go:303] handling current node
	I0719 14:49:19.893429       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:49:19.893439       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:49:29.893088       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:49:29.893197       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:49:29.893364       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:49:29.893398       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:49:29.893472       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:49:29.893481       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:49:29.893546       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:49:29.893574       1 main.go:303] handling current node
	I0719 14:49:39.897017       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:49:39.897065       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:49:39.897303       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:49:39.897329       1 main.go:303] handling current node
	I0719 14:49:39.897345       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:49:39.897350       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:49:39.897395       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:49:39.897414       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
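
	Each container log section is keyed by its full CRI-O container ID; on the node itself the same output can typically be read through crictl (a sketch using the ID from the second kindnet header, assuming crictl is present in the VM as in standard minikube images):
	  minikube -p ha-999305 ssh -- sudo crictl logs f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d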
	
	
	==> kube-apiserver [1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073] <==
	I0719 14:51:22.812148       1 options.go:221] external host was not specified, using 192.168.39.240
	I0719 14:51:22.834194       1 server.go:148] Version: v1.30.3
	I0719 14:51:22.835586       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:51:23.685363       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0719 14:51:23.688960       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 14:51:23.697592       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0719 14:51:23.697627       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0719 14:51:23.697814       1 instance.go:299] Using reconciler: lease
	W0719 14:51:43.678753       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0719 14:51:43.685420       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0719 14:51:43.701275       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4] <==
	I0719 14:52:05.255105       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0719 14:52:05.255142       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0719 14:52:05.256988       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0719 14:52:05.348756       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 14:52:05.352011       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 14:52:05.352264       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 14:52:05.380320       1 aggregator.go:165] initial CRD sync complete...
	I0719 14:52:05.380374       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 14:52:05.380381       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 14:52:05.380387       1 cache.go:39] Caches are synced for autoregister controller
	I0719 14:52:05.385954       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 14:52:05.399685       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 14:52:05.399764       1 policy_source.go:224] refreshing policies
	I0719 14:52:05.399950       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 14:52:05.441030       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 14:52:05.441691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 14:52:05.442979       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 14:52:05.443063       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0719 14:52:05.461222       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.163 192.168.39.250]
	I0719 14:52:05.463169       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 14:52:05.482624       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 14:52:05.486091       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0719 14:52:05.492389       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0719 14:52:06.247621       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 14:52:06.644513       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.163 192.168.39.240 192.168.39.250]
	
	
	==> kube-controller-manager [dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1] <==
	I0719 14:52:17.621368       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-999305-m03"
	I0719 14:52:17.621425       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-999305-m04"
	I0719 14:52:17.621303       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-999305-m02"
	I0719 14:52:17.621488       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0719 14:52:17.645943       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 14:52:17.699791       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 14:52:18.096461       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 14:52:18.096561       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 14:52:18.122738       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 14:52:28.068682       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.4312ms"
	I0719 14:52:28.069024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="136.201µs"
	I0719 14:52:30.625130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.310876ms"
	I0719 14:52:30.625599       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.717µs"
	I0719 14:52:30.671712       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-k4wfp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-k4wfp\": the object has been modified; please apply your changes to the latest version and try again"
	I0719 14:52:30.672102       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4c4b425b-19f7-4c10-9617-b30d3a1a4d26", APIVersion:"v1", ResourceVersion:"244", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-k4wfp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-k4wfp": the object has been modified; please apply your changes to the latest version and try again
	I0719 14:52:30.699683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.395253ms"
	I0719 14:52:30.703443       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-k4wfp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-k4wfp\": the object has been modified; please apply your changes to the latest version and try again"
	I0719 14:52:30.704016       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4c4b425b-19f7-4c10-9617-b30d3a1a4d26", APIVersion:"v1", ResourceVersion:"244", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-k4wfp EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-k4wfp": the object has been modified; please apply your changes to the latest version and try again
	I0719 14:52:30.704131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.216µs"
	I0719 14:52:52.412170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.559652ms"
	I0719 14:52:52.412280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.383µs"
	I0719 14:52:54.675482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.742µs"
	I0719 14:53:18.177983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.672835ms"
	I0719 14:53:18.178257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.517µs"
	I0719 14:53:47.421093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-999305-m04"
	
	
	==> kube-controller-manager [fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b] <==
	I0719 14:51:23.691624       1 serving.go:380] Generated self-signed cert in-memory
	I0719 14:51:24.107910       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0719 14:51:24.107954       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:51:24.109832       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0719 14:51:24.110078       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 14:51:24.110294       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 14:51:24.110493       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0719 14:51:44.708367       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.240:8443/healthz\": dial tcp 192.168.39.240:8443: connect: connection refused"
	
	
	==> kube-proxy [3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859] <==
	E0719 14:48:38.200538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:41.271540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:41.271615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:41.271701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:41.271738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:44.343206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:44.343270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:47.416064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:47.416190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:50.486739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:50.488231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:50.488125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:50.488365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:56.633323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:56.633714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:02.775358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:02.775540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:05.846774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:05.847064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:18.135499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:18.135559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:21.206810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:21.207061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:27.351355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:27.351697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2] <==
	I0719 14:51:24.180579       1 server_linux.go:69] "Using iptables proxy"
	E0719 14:51:27.158434       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:30.231645       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:33.303284       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:39.446300       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:48.662776       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0719 14:52:07.832146       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	I0719 14:52:07.869478       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 14:52:07.869528       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 14:52:07.869544       1 server_linux.go:165] "Using iptables Proxier"
	I0719 14:52:07.872272       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 14:52:07.872503       1 server.go:872] "Version info" version="v1.30.3"
	I0719 14:52:07.872723       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:52:07.874408       1 config.go:192] "Starting service config controller"
	I0719 14:52:07.874506       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 14:52:07.874572       1 config.go:101] "Starting endpoint slice config controller"
	I0719 14:52:07.874592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 14:52:07.875305       1 config.go:319] "Starting node config controller"
	I0719 14:52:07.875343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 14:52:07.975983       1 shared_informer.go:320] Caches are synced for service config
	I0719 14:52:07.976507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 14:52:07.976568       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763] <==
	W0719 14:52:00.398208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.240:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:00.398294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.240:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:00.649282       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:00.649357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:00.750529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:00.750608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:01.470603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.240:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:01.470653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.240:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:01.983475       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.240:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:01.983574       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.240:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:02.234433       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:02.234509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:02.395474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:02.395548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:05.272626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:52:05.274985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:52:05.275499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 14:52:05.275606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 14:52:05.275741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 14:52:05.275815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 14:52:05.276038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 14:52:05.276130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 14:52:05.276269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:52:05.276353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0719 14:52:15.113703       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a] <==
	W0719 14:49:37.108746       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 14:49:37.108833       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 14:49:37.124702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:49:37.124811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 14:49:37.138068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 14:49:37.138151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 14:49:37.178638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 14:49:37.178751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 14:49:37.341183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:49:37.341285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:49:37.592984       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 14:49:37.593094       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 14:49:40.023498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 14:49:40.023601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 14:49:40.481803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 14:49:40.481836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 14:49:40.842009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:49:40.842112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 14:49:40.862343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 14:49:40.862375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 14:49:40.898799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 14:49:40.898971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 14:49:41.067927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 14:49:41.068007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 14:49:41.144659       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 14:52:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:52:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:52:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:52:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:52:06 ha-999305 kubelet[1369]: I0719 14:52:06.342497    1369 scope.go:117] "RemoveContainer" containerID="3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc"
	Jul 19 14:52:06 ha-999305 kubelet[1369]: E0719 14:52:06.342682    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5dc00743-8980-495b-9a44-c3d3d42829f6)\"" pod="kube-system/storage-provisioner" podUID="5dc00743-8980-495b-9a44-c3d3d42829f6"
	Jul 19 14:52:07 ha-999305 kubelet[1369]: E0719 14:52:07.094238    1369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-999305?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 19 14:52:07 ha-999305 kubelet[1369]: I0719 14:52:07.094368    1369 status_manager.go:853] "Failed to get status for pod" podUID="5dc00743-8980-495b-9a44-c3d3d42829f6" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 14:52:07 ha-999305 kubelet[1369]: E0719 14:52:07.094734    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-999305\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 14:52:19 ha-999305 kubelet[1369]: I0719 14:52:19.342242    1369 scope.go:117] "RemoveContainer" containerID="3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc"
	Jul 19 14:52:19 ha-999305 kubelet[1369]: E0719 14:52:19.343635    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5dc00743-8980-495b-9a44-c3d3d42829f6)\"" pod="kube-system/storage-provisioner" podUID="5dc00743-8980-495b-9a44-c3d3d42829f6"
	Jul 19 14:52:29 ha-999305 kubelet[1369]: I0719 14:52:29.491044    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-2rfw6" podStartSLOduration=568.948402777 podStartE2EDuration="9m31.490991185s" podCreationTimestamp="2024-07-19 14:42:58 +0000 UTC" firstStartedPulling="2024-07-19 14:43:01.202706735 +0000 UTC m=+236.045700765" lastFinishedPulling="2024-07-19 14:43:03.745295151 +0000 UTC m=+238.588289173" observedRunningTime="2024-07-19 14:43:04.337387848 +0000 UTC m=+239.180381888" watchObservedRunningTime="2024-07-19 14:52:29.490991185 +0000 UTC m=+804.333985223"
	Jul 19 14:52:32 ha-999305 kubelet[1369]: I0719 14:52:32.342639    1369 scope.go:117] "RemoveContainer" containerID="3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc"
	Jul 19 14:52:32 ha-999305 kubelet[1369]: E0719 14:52:32.343399    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5dc00743-8980-495b-9a44-c3d3d42829f6)\"" pod="kube-system/storage-provisioner" podUID="5dc00743-8980-495b-9a44-c3d3d42829f6"
	Jul 19 14:52:45 ha-999305 kubelet[1369]: I0719 14:52:45.343031    1369 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-999305" podUID="81ac3b87-e88d-4ee9-98ca-5c098350c157"
	Jul 19 14:52:45 ha-999305 kubelet[1369]: I0719 14:52:45.345814    1369 scope.go:117] "RemoveContainer" containerID="3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc"
	Jul 19 14:52:45 ha-999305 kubelet[1369]: E0719 14:52:45.346256    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5dc00743-8980-495b-9a44-c3d3d42829f6)\"" pod="kube-system/storage-provisioner" podUID="5dc00743-8980-495b-9a44-c3d3d42829f6"
	Jul 19 14:52:45 ha-999305 kubelet[1369]: I0719 14:52:45.376653    1369 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-999305"
	Jul 19 14:52:59 ha-999305 kubelet[1369]: I0719 14:52:59.342385    1369 scope.go:117] "RemoveContainer" containerID="3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc"
	Jul 19 14:52:59 ha-999305 kubelet[1369]: I0719 14:52:59.620472    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-999305" podStartSLOduration=14.620448461 podStartE2EDuration="14.620448461s" podCreationTimestamp="2024-07-19 14:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 14:52:55.364179008 +0000 UTC m=+830.207173048" watchObservedRunningTime="2024-07-19 14:52:59.620448461 +0000 UTC m=+834.463442499"
	Jul 19 14:53:05 ha-999305 kubelet[1369]: E0719 14:53:05.411572    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:53:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:53:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:53:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:53:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 14:53:54.912412   30503 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-3847/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
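The "bufio.Scanner: token too long" failure above is a standard Go limitation rather than a minikube-specific bug: bufio.Scanner refuses tokens larger than bufio.MaxScanTokenSize (64 KiB) unless it is handed a larger buffer, and lastStart.txt evidently contains a line longer than that. A minimal sketch of the usual workaround, reusing the path from the error message, would be:

	// Sketch only: read a log file whose lines may exceed bufio's default
	// 64 KiB token limit; the path is the one reported in the error above.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("/home/jenkins/minikube-integration/19302-3847/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Allow lines up to 10 MiB instead of the default bufio.MaxScanTokenSize (64 KiB).
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil { // "token too long" would surface here
			fmt.Fprintln(os.Stderr, err)
		}
	}

An alternative with no fixed token limit is bufio.Reader.ReadString('\n'), which grows its buffer as needed.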
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-999305 -n ha-999305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-999305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (379.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 stop -v=7 --alsologtostderr
E0719 14:54:28.744017   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 stop -v=7 --alsologtostderr: exit status 82 (2m0.44673978s)

                                                
                                                
-- stdout --
	* Stopping node "ha-999305-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:54:14.656076   30909 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:54:14.656371   30909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:54:14.656381   30909 out.go:304] Setting ErrFile to fd 2...
	I0719 14:54:14.656385   30909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:54:14.656589   30909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:54:14.656806   30909 out.go:298] Setting JSON to false
	I0719 14:54:14.656875   30909 mustload.go:65] Loading cluster: ha-999305
	I0719 14:54:14.657226   30909 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:54:14.657338   30909 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:54:14.657545   30909 mustload.go:65] Loading cluster: ha-999305
	I0719 14:54:14.657671   30909 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:54:14.657693   30909 stop.go:39] StopHost: ha-999305-m04
	I0719 14:54:14.658044   30909 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:54:14.658085   30909 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:54:14.673796   30909 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0719 14:54:14.674267   30909 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:54:14.674902   30909 main.go:141] libmachine: Using API Version  1
	I0719 14:54:14.674928   30909 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:54:14.675209   30909 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:54:14.677497   30909 out.go:177] * Stopping node "ha-999305-m04"  ...
	I0719 14:54:14.679425   30909 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 14:54:14.679479   30909 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:54:14.679673   30909 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 14:54:14.679704   30909 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:54:14.682407   30909 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:54:14.682757   30909 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:53:42 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:54:14.682791   30909 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:54:14.682863   30909 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:54:14.683015   30909 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:54:14.683150   30909 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:54:14.683305   30909 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	I0719 14:54:14.765125   30909 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 14:54:14.818607   30909 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 14:54:14.871851   30909 main.go:141] libmachine: Stopping "ha-999305-m04"...
	I0719 14:54:14.871912   30909 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:54:14.873541   30909 main.go:141] libmachine: (ha-999305-m04) Calling .Stop
	I0719 14:54:14.876856   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 0/120
	I0719 14:54:15.878525   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 1/120
	I0719 14:54:16.880616   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 2/120
	I0719 14:54:17.881930   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 3/120
	I0719 14:54:18.883261   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 4/120
	I0719 14:54:19.885143   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 5/120
	I0719 14:54:20.887175   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 6/120
	I0719 14:54:21.888604   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 7/120
	I0719 14:54:22.890030   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 8/120
	I0719 14:54:23.891263   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 9/120
	I0719 14:54:24.893289   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 10/120
	I0719 14:54:25.894537   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 11/120
	I0719 14:54:26.895700   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 12/120
	I0719 14:54:27.896972   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 13/120
	I0719 14:54:28.898314   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 14/120
	I0719 14:54:29.899919   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 15/120
	I0719 14:54:30.901399   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 16/120
	I0719 14:54:31.902583   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 17/120
	I0719 14:54:32.903922   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 18/120
	I0719 14:54:33.905376   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 19/120
	I0719 14:54:34.906953   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 20/120
	I0719 14:54:35.908719   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 21/120
	I0719 14:54:36.910072   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 22/120
	I0719 14:54:37.911536   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 23/120
	I0719 14:54:38.912922   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 24/120
	I0719 14:54:39.914717   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 25/120
	I0719 14:54:40.916832   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 26/120
	I0719 14:54:41.918037   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 27/120
	I0719 14:54:42.919251   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 28/120
	I0719 14:54:43.920469   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 29/120
	I0719 14:54:44.921806   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 30/120
	I0719 14:54:45.922978   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 31/120
	I0719 14:54:46.924559   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 32/120
	I0719 14:54:47.925773   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 33/120
	I0719 14:54:48.927173   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 34/120
	I0719 14:54:49.929107   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 35/120
	I0719 14:54:50.930539   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 36/120
	I0719 14:54:51.932624   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 37/120
	I0719 14:54:52.933973   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 38/120
	I0719 14:54:53.935233   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 39/120
	I0719 14:54:54.936668   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 40/120
	I0719 14:54:55.938490   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 41/120
	I0719 14:54:56.939604   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 42/120
	I0719 14:54:57.940851   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 43/120
	I0719 14:54:58.942218   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 44/120
	I0719 14:54:59.943979   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 45/120
	I0719 14:55:00.945247   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 46/120
	I0719 14:55:01.946452   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 47/120
	I0719 14:55:02.947801   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 48/120
	I0719 14:55:03.948960   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 49/120
	I0719 14:55:04.951135   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 50/120
	I0719 14:55:05.952362   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 51/120
	I0719 14:55:06.954210   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 52/120
	I0719 14:55:07.955545   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 53/120
	I0719 14:55:08.957086   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 54/120
	I0719 14:55:09.958911   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 55/120
	I0719 14:55:10.960758   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 56/120
	I0719 14:55:11.962018   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 57/120
	I0719 14:55:12.963496   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 58/120
	I0719 14:55:13.964751   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 59/120
	I0719 14:55:14.966843   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 60/120
	I0719 14:55:15.968064   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 61/120
	I0719 14:55:16.969900   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 62/120
	I0719 14:55:17.971040   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 63/120
	I0719 14:55:18.972511   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 64/120
	I0719 14:55:19.973988   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 65/120
	I0719 14:55:20.975431   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 66/120
	I0719 14:55:21.976818   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 67/120
	I0719 14:55:22.978202   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 68/120
	I0719 14:55:23.979514   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 69/120
	I0719 14:55:24.981367   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 70/120
	I0719 14:55:25.982674   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 71/120
	I0719 14:55:26.983820   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 72/120
	I0719 14:55:27.985177   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 73/120
	I0719 14:55:28.986579   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 74/120
	I0719 14:55:29.988283   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 75/120
	I0719 14:55:30.989863   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 76/120
	I0719 14:55:31.991026   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 77/120
	I0719 14:55:32.992580   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 78/120
	I0719 14:55:33.993817   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 79/120
	I0719 14:55:34.995533   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 80/120
	I0719 14:55:35.996821   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 81/120
	I0719 14:55:36.998118   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 82/120
	I0719 14:55:37.999487   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 83/120
	I0719 14:55:39.000944   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 84/120
	I0719 14:55:40.003006   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 85/120
	I0719 14:55:41.005081   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 86/120
	I0719 14:55:42.007077   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 87/120
	I0719 14:55:43.008372   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 88/120
	I0719 14:55:44.009710   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 89/120
	I0719 14:55:45.011131   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 90/120
	I0719 14:55:46.013182   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 91/120
	I0719 14:55:47.014677   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 92/120
	I0719 14:55:48.016631   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 93/120
	I0719 14:55:49.017894   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 94/120
	I0719 14:55:50.019191   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 95/120
	I0719 14:55:51.020455   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 96/120
	I0719 14:55:52.021794   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 97/120
	I0719 14:55:53.022992   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 98/120
	I0719 14:55:54.024663   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 99/120
	I0719 14:55:55.026415   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 100/120
	I0719 14:55:56.028547   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 101/120
	I0719 14:55:57.030041   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 102/120
	I0719 14:55:58.031328   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 103/120
	I0719 14:55:59.032547   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 104/120
	I0719 14:56:00.034080   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 105/120
	I0719 14:56:01.035455   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 106/120
	I0719 14:56:02.036851   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 107/120
	I0719 14:56:03.038046   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 108/120
	I0719 14:56:04.039261   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 109/120
	I0719 14:56:05.041180   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 110/120
	I0719 14:56:06.042642   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 111/120
	I0719 14:56:07.044583   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 112/120
	I0719 14:56:08.046044   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 113/120
	I0719 14:56:09.047668   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 114/120
	I0719 14:56:10.049374   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 115/120
	I0719 14:56:11.050702   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 116/120
	I0719 14:56:12.052569   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 117/120
	I0719 14:56:13.053675   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 118/120
	I0719 14:56:14.055190   30909 main.go:141] libmachine: (ha-999305-m04) Waiting for machine to stop 119/120
	I0719 14:56:15.055857   30909 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 14:56:15.055909   30909 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 14:56:15.057982   30909 out.go:177] 
	W0719 14:56:15.059485   30909 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 14:56:15.059504   30909 out.go:239] * 
	* 
	W0719 14:56:15.061752   30909 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 14:56:15.062936   30909 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-999305 stop -v=7 --alsologtostderr": exit status 82
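The "Waiting for machine to stop i/120" lines above are a bounded poll: the driver issues the stop request, then re-checks the VM state roughly once a second for up to 120 attempts before giving up with GUEST_STOP_TIMEOUT (exit status 82). The following Go snippet is a minimal, self-contained sketch of that pattern, not minikube's real implementation: stopVM and stateOf are placeholder stubs standing in for the libmachine driver calls, and the 120-attempt budget is simply read off the log.

	package main

	import (
		"fmt"
		"time"
	)

	// stopVM and stateOf are placeholder stubs for the libmachine driver calls
	// seen in the log; they are assumptions for illustration only.
	func stopVM(name string) error {
		fmt.Println("requesting stop of", name)
		return nil
	}

	func stateOf(name string) (string, error) {
		return "Running", nil // simulate a guest that never powers off
	}

	// waitForStop polls the VM state once per second for up to maxAttempts,
	// mirroring the "Waiting for machine to stop i/120" lines above.
	func waitForStop(name string, maxAttempts int) error {
		if err := stopVM(name); err != nil {
			return err
		}
		for i := 0; i < maxAttempts; i++ {
			st, err := stateOf(name)
			if err != nil {
				return err
			}
			if st == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		st, _ := stateOf(name)
		return fmt.Errorf("unable to stop vm, current state %q", st)
	}

	func main() {
		// The log used 120 attempts; 3 keeps this demo short while still
		// ending in the same timeout path as the failure above.
		if err := waitForStop("ha-999305-m04", 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}

In the run above the guest never left "Running", so the loop exhausted its budget and the test surfaced exit status 82.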
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr: exit status 3 (19.066118129s)

                                                
                                                
-- stdout --
	ha-999305
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-999305-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:56:15.107320   31349 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:56:15.107606   31349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:56:15.107616   31349 out.go:304] Setting ErrFile to fd 2...
	I0719 14:56:15.107623   31349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:56:15.107811   31349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:56:15.107994   31349 out.go:298] Setting JSON to false
	I0719 14:56:15.108029   31349 mustload.go:65] Loading cluster: ha-999305
	I0719 14:56:15.108067   31349 notify.go:220] Checking for updates...
	I0719 14:56:15.108421   31349 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:56:15.108442   31349 status.go:255] checking status of ha-999305 ...
	I0719 14:56:15.108876   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.108953   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.126829   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0719 14:56:15.127236   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.127863   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.127902   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.128282   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.128484   31349 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:56:15.130052   31349 status.go:330] ha-999305 host status = "Running" (err=<nil>)
	I0719 14:56:15.130070   31349 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:56:15.130423   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.130458   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.146962   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0719 14:56:15.147292   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.147688   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.147705   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.147984   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.148156   31349 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:56:15.150558   31349 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:56:15.150945   31349 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:56:15.150977   31349 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:56:15.151071   31349 host.go:66] Checking if "ha-999305" exists ...
	I0719 14:56:15.151340   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.151380   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.165690   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0719 14:56:15.166108   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.166627   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.166660   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.166961   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.167139   31349 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:56:15.167311   31349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:56:15.167335   31349 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:56:15.170165   31349 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:56:15.170632   31349 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:56:15.170669   31349 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:56:15.170783   31349 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:56:15.170947   31349 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:56:15.171077   31349 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:56:15.171244   31349 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:56:15.258178   31349 ssh_runner.go:195] Run: systemctl --version
	I0719 14:56:15.265574   31349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:56:15.282760   31349 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:56:15.282783   31349 api_server.go:166] Checking apiserver status ...
	I0719 14:56:15.282812   31349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:56:15.299860   31349 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5060/cgroup
	W0719 14:56:15.309335   31349 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5060/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:56:15.309374   31349 ssh_runner.go:195] Run: ls
	I0719 14:56:15.313670   31349 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:56:15.320052   31349 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:56:15.320074   31349 status.go:422] ha-999305 apiserver status = Running (err=<nil>)
	I0719 14:56:15.320086   31349 status.go:257] ha-999305 status: &{Name:ha-999305 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:56:15.320105   31349 status.go:255] checking status of ha-999305-m02 ...
	I0719 14:56:15.320420   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.320465   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.335559   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44479
	I0719 14:56:15.335985   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.336392   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.336413   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.336716   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.336939   31349 main.go:141] libmachine: (ha-999305-m02) Calling .GetState
	I0719 14:56:15.338366   31349 status.go:330] ha-999305-m02 host status = "Running" (err=<nil>)
	I0719 14:56:15.338389   31349 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:56:15.338661   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.338703   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.354129   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43663
	I0719 14:56:15.354562   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.355000   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.355018   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.355338   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.355526   31349 main.go:141] libmachine: (ha-999305-m02) Calling .GetIP
	I0719 14:56:15.357999   31349 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:56:15.358396   31349 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:51:28 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:56:15.358416   31349 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:56:15.358523   31349 host.go:66] Checking if "ha-999305-m02" exists ...
	I0719 14:56:15.358818   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.358858   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.372868   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0719 14:56:15.373239   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.373700   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.373722   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.373982   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.374158   31349 main.go:141] libmachine: (ha-999305-m02) Calling .DriverName
	I0719 14:56:15.374365   31349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:56:15.374383   31349 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHHostname
	I0719 14:56:15.377045   31349 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:56:15.377441   31349 main.go:141] libmachine: (ha-999305-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:f6:ba", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:51:28 +0000 UTC Type:0 Mac:52:54:00:8f:f6:ba Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-999305-m02 Clientid:01:52:54:00:8f:f6:ba}
	I0719 14:56:15.377468   31349 main.go:141] libmachine: (ha-999305-m02) DBG | domain ha-999305-m02 has defined IP address 192.168.39.163 and MAC address 52:54:00:8f:f6:ba in network mk-ha-999305
	I0719 14:56:15.377614   31349 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHPort
	I0719 14:56:15.377763   31349 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHKeyPath
	I0719 14:56:15.377915   31349 main.go:141] libmachine: (ha-999305-m02) Calling .GetSSHUsername
	I0719 14:56:15.378016   31349 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m02/id_rsa Username:docker}
	I0719 14:56:15.463111   31349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 14:56:15.479141   31349 kubeconfig.go:125] found "ha-999305" server: "https://192.168.39.254:8443"
	I0719 14:56:15.479164   31349 api_server.go:166] Checking apiserver status ...
	I0719 14:56:15.479196   31349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 14:56:15.494248   31349 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1572/cgroup
	W0719 14:56:15.503371   31349 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1572/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 14:56:15.503425   31349 ssh_runner.go:195] Run: ls
	I0719 14:56:15.508483   31349 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 14:56:15.513474   31349 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 14:56:15.513498   31349 status.go:422] ha-999305-m02 apiserver status = Running (err=<nil>)
	I0719 14:56:15.513507   31349 status.go:257] ha-999305-m02 status: &{Name:ha-999305-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 14:56:15.513534   31349 status.go:255] checking status of ha-999305-m04 ...
	I0719 14:56:15.513852   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.513889   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.528387   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35503
	I0719 14:56:15.528807   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.529264   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.529286   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.529628   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.529801   31349 main.go:141] libmachine: (ha-999305-m04) Calling .GetState
	I0719 14:56:15.531459   31349 status.go:330] ha-999305-m04 host status = "Running" (err=<nil>)
	I0719 14:56:15.531473   31349 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:56:15.531748   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.531781   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.545802   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41325
	I0719 14:56:15.546170   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.546627   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.546651   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.546960   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.547134   31349 main.go:141] libmachine: (ha-999305-m04) Calling .GetIP
	I0719 14:56:15.549676   31349 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:56:15.550058   31349 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:53:42 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:56:15.550082   31349 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:56:15.550382   31349 host.go:66] Checking if "ha-999305-m04" exists ...
	I0719 14:56:15.550667   31349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:56:15.550717   31349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:56:15.565538   31349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I0719 14:56:15.565919   31349 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:56:15.566339   31349 main.go:141] libmachine: Using API Version  1
	I0719 14:56:15.566360   31349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:56:15.566657   31349 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:56:15.566853   31349 main.go:141] libmachine: (ha-999305-m04) Calling .DriverName
	I0719 14:56:15.567060   31349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 14:56:15.567083   31349 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHHostname
	I0719 14:56:15.569569   31349 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:56:15.569919   31349 main.go:141] libmachine: (ha-999305-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:3a:e8", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:53:42 +0000 UTC Type:0 Mac:52:54:00:db:3a:e8 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-999305-m04 Clientid:01:52:54:00:db:3a:e8}
	I0719 14:56:15.569935   31349 main.go:141] libmachine: (ha-999305-m04) DBG | domain ha-999305-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:db:3a:e8 in network mk-ha-999305
	I0719 14:56:15.570076   31349 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHPort
	I0719 14:56:15.570227   31349 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHKeyPath
	I0719 14:56:15.570375   31349 main.go:141] libmachine: (ha-999305-m04) Calling .GetSSHUsername
	I0719 14:56:15.570498   31349 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305-m04/id_rsa Username:docker}
	W0719 14:56:34.130528   31349 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0719 14:56:34.130644   31349 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0719 14:56:34.130686   31349 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0719 14:56:34.130698   31349 status.go:257] ha-999305-m04 status: &{Name:ha-999305-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0719 14:56:34.130727   31349 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr" : exit status 3
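Exit status 3 here traces back to the SSH dial against ha-999305-m04 failing with "no route to host", which the status check reports as host: Error / kubelet: Nonexistent. A quick way to reproduce just the reachability half of that check is a plain TCP dial with a timeout; this standalone sketch is an illustration under assumptions (the address is taken from the log, and minikube's own status code goes through a full SSH session rather than a bare dial).

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH reports whether a node's SSH port answers within the timeout.
	// A "no route to host" or timeout here corresponds to the Host:Error /
	// kubelet:Nonexistent entries in the status output above.
	func probeSSH(ip string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		// 192.168.39.225 is the ha-999305-m04 address recorded in the log.
		if err := probeSSH("192.168.39.225", 5*time.Second); err != nil {
			fmt.Println("node unreachable:", err)
			return
		}
		fmt.Println("ssh port reachable")
	}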
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-999305 -n ha-999305
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-999305 logs -n 25: (1.762113055s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m04 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp testdata/cp-test.txt                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305:/home/docker/cp-test_ha-999305-m04_ha-999305.txt                      |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305 sudo cat                                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305.txt                                |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m02:/home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m02 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m03:/home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n                                                                | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | ha-999305-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-999305 ssh -n ha-999305-m03 sudo cat                                         | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC | 19 Jul 24 14:44 UTC |
	|         | /home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-999305 node stop m02 -v=7                                                    | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:44 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-999305 node start m02 -v=7                                                   | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:46 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-999305 -v=7                                                          | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-999305 -v=7                                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:47 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-999305 --wait=true -v=7                                                   | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:49 UTC | 19 Jul 24 14:53 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-999305                                                               | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:53 UTC |                     |
	| node    | ha-999305 node delete m03 -v=7                                                  | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:53 UTC | 19 Jul 24 14:54 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-999305 stop -v=7                                                             | ha-999305 | jenkins | v1.33.1 | 19 Jul 24 14:54 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:49:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:49:40.105178   29144 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:49:40.105299   29144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:49:40.105308   29144 out.go:304] Setting ErrFile to fd 2...
	I0719 14:49:40.105313   29144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:49:40.105496   29144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:49:40.106030   29144 out.go:298] Setting JSON to false
	I0719 14:49:40.106945   29144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1926,"bootTime":1721398654,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:49:40.106999   29144 start.go:139] virtualization: kvm guest
	I0719 14:49:40.109542   29144 out.go:177] * [ha-999305] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:49:40.111084   29144 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:49:40.111117   29144 notify.go:220] Checking for updates...
	I0719 14:49:40.114041   29144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:49:40.115378   29144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:49:40.116728   29144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:49:40.118107   29144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:49:40.119301   29144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:49:40.120850   29144 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:49:40.120962   29144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:49:40.121365   29144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:49:40.121422   29144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:49:40.136080   29144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35469
	I0719 14:49:40.136542   29144 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:49:40.137035   29144 main.go:141] libmachine: Using API Version  1
	I0719 14:49:40.137055   29144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:49:40.137437   29144 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:49:40.137618   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:49:40.173709   29144 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 14:49:40.175045   29144 start.go:297] selected driver: kvm2
	I0719 14:49:40.175071   29144 start.go:901] validating driver "kvm2" against &{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:49:40.175218   29144 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:49:40.175698   29144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:49:40.175785   29144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:49:40.191425   29144 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:49:40.192147   29144 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 14:49:40.192178   29144 cni.go:84] Creating CNI manager for ""
	I0719 14:49:40.192184   29144 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 14:49:40.192244   29144 start.go:340] cluster config:
	{Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:49:40.192393   29144 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:49:40.194356   29144 out.go:177] * Starting "ha-999305" primary control-plane node in "ha-999305" cluster
	I0719 14:49:40.195722   29144 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:49:40.195763   29144 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 14:49:40.195773   29144 cache.go:56] Caching tarball of preloaded images
	I0719 14:49:40.195843   29144 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 14:49:40.195854   29144 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 14:49:40.195975   29144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/config.json ...
	I0719 14:49:40.196182   29144 start.go:360] acquireMachinesLock for ha-999305: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 14:49:40.196227   29144 start.go:364] duration metric: took 26.699µs to acquireMachinesLock for "ha-999305"
	I0719 14:49:40.196242   29144 start.go:96] Skipping create...Using existing machine configuration
	I0719 14:49:40.196246   29144 fix.go:54] fixHost starting: 
	I0719 14:49:40.196493   29144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:49:40.196526   29144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:49:40.211853   29144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0719 14:49:40.212348   29144 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:49:40.212800   29144 main.go:141] libmachine: Using API Version  1
	I0719 14:49:40.212826   29144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:49:40.213108   29144 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:49:40.213296   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:49:40.213473   29144 main.go:141] libmachine: (ha-999305) Calling .GetState
	I0719 14:49:40.215328   29144 fix.go:112] recreateIfNeeded on ha-999305: state=Running err=<nil>
	W0719 14:49:40.215346   29144 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 14:49:40.217258   29144 out.go:177] * Updating the running kvm2 "ha-999305" VM ...
	I0719 14:49:40.218473   29144 machine.go:94] provisionDockerMachine start ...
	I0719 14:49:40.218492   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:49:40.218701   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.221061   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.221505   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.221531   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.221711   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.221877   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.222019   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.222179   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.222394   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:40.222607   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:40.222619   29144 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 14:49:40.327399   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305
	
	I0719 14:49:40.327427   29144 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:49:40.327660   29144 buildroot.go:166] provisioning hostname "ha-999305"
	I0719 14:49:40.327688   29144 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:49:40.327887   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.330384   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.330865   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.330885   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.331058   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.331238   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.331385   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.331504   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.331627   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:40.331865   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:40.331884   29144 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-999305 && echo "ha-999305" | sudo tee /etc/hostname
	I0719 14:49:40.451751   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-999305
	
	I0719 14:49:40.451776   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.454251   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.454709   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.454737   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.454931   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.455132   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.455335   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.455505   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.455644   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:40.455800   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:40.455814   29144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-999305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-999305/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-999305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 14:49:40.559254   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 14:49:40.559282   29144 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 14:49:40.559298   29144 buildroot.go:174] setting up certificates
	I0719 14:49:40.559305   29144 provision.go:84] configureAuth start
	I0719 14:49:40.559313   29144 main.go:141] libmachine: (ha-999305) Calling .GetMachineName
	I0719 14:49:40.559560   29144 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:49:40.562143   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.562606   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.562631   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.562744   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.564772   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.565162   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.565183   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.565325   29144 provision.go:143] copyHostCerts
	I0719 14:49:40.565363   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:49:40.565410   29144 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 14:49:40.565422   29144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 14:49:40.565489   29144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 14:49:40.565574   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:49:40.565599   29144 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 14:49:40.565610   29144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 14:49:40.565637   29144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 14:49:40.565692   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:49:40.565715   29144 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 14:49:40.565721   29144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 14:49:40.565742   29144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 14:49:40.565798   29144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.ha-999305 san=[127.0.0.1 192.168.39.240 ha-999305 localhost minikube]
	I0719 14:49:40.845539   29144 provision.go:177] copyRemoteCerts
	I0719 14:49:40.845596   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 14:49:40.845630   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:40.848385   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.848705   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:40.848738   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:40.848907   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:40.849129   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:40.849292   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:40.849439   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:49:40.933577   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 14:49:40.933642   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 14:49:40.963389   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 14:49:40.963486   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0719 14:49:40.996244   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 14:49:40.996321   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 14:49:41.022200   29144 provision.go:87] duration metric: took 462.869786ms to configureAuth
	I0719 14:49:41.022226   29144 buildroot.go:189] setting minikube options for container-runtime
	I0719 14:49:41.022460   29144 config.go:182] Loaded profile config "ha-999305": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:49:41.022539   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:49:41.025092   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:41.025445   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:49:41.025465   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:49:41.025637   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:49:41.025856   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:41.026012   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:49:41.026156   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:49:41.026320   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:49:41.026489   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:49:41.026503   29144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 14:51:11.875061   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 14:51:11.875094   29144 machine.go:97] duration metric: took 1m31.656610142s to provisionDockerMachine
	I0719 14:51:11.875107   29144 start.go:293] postStartSetup for "ha-999305" (driver="kvm2")
	I0719 14:51:11.875118   29144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 14:51:11.875134   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:11.875513   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 14:51:11.875545   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:11.878578   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:11.878994   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:11.879019   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:11.879148   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:11.879332   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:11.879486   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:11.879615   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:51:11.966200   29144 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 14:51:11.970465   29144 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 14:51:11.970491   29144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 14:51:11.970549   29144 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 14:51:11.970639   29144 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 14:51:11.970649   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 14:51:11.970747   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 14:51:11.980960   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:51:12.005929   29144 start.go:296] duration metric: took 130.807251ms for postStartSetup
	I0719 14:51:12.005983   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.006313   29144 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0719 14:51:12.006340   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.009115   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.009479   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.009501   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.009629   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.009819   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.009985   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.010282   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	W0719 14:51:12.093343   29144 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0719 14:51:12.093372   29144 fix.go:56] duration metric: took 1m31.897125382s for fixHost
	I0719 14:51:12.093393   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.096574   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.097014   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.097040   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.097188   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.097373   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.097542   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.097697   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.097867   29144 main.go:141] libmachine: Using SSH client type: native
	I0719 14:51:12.098071   29144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0719 14:51:12.098086   29144 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 14:51:12.203077   29144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721400672.166782476
	
	I0719 14:51:12.203098   29144 fix.go:216] guest clock: 1721400672.166782476
	I0719 14:51:12.203104   29144 fix.go:229] Guest: 2024-07-19 14:51:12.166782476 +0000 UTC Remote: 2024-07-19 14:51:12.093379426 +0000 UTC m=+92.020235625 (delta=73.40305ms)
	I0719 14:51:12.203139   29144 fix.go:200] guest clock delta is within tolerance: 73.40305ms
	I0719 14:51:12.203150   29144 start.go:83] releasing machines lock for "ha-999305", held for 1m32.006913425s
	I0719 14:51:12.203176   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.203438   29144 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:51:12.205781   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.206156   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.206178   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.206357   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.206894   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.207057   29144 main.go:141] libmachine: (ha-999305) Calling .DriverName
	I0719 14:51:12.207127   29144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 14:51:12.207176   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.207244   29144 ssh_runner.go:195] Run: cat /version.json
	I0719 14:51:12.207266   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHHostname
	I0719 14:51:12.210002   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210028   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210394   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.210425   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210450   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:12.210467   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:12.210546   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.210730   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.210758   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHPort
	I0719 14:51:12.210919   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.210930   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHKeyPath
	I0719 14:51:12.211106   29144 main.go:141] libmachine: (ha-999305) Calling .GetSSHUsername
	I0719 14:51:12.211092   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:51:12.211259   29144 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/ha-999305/id_rsa Username:docker}
	I0719 14:51:12.287688   29144 ssh_runner.go:195] Run: systemctl --version
	I0719 14:51:12.313613   29144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 14:51:12.473291   29144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 14:51:12.484625   29144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 14:51:12.484697   29144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 14:51:12.495546   29144 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 14:51:12.495572   29144 start.go:495] detecting cgroup driver to use...
	I0719 14:51:12.495642   29144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 14:51:12.514801   29144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 14:51:12.529894   29144 docker.go:217] disabling cri-docker service (if available) ...
	I0719 14:51:12.529951   29144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 14:51:12.544809   29144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 14:51:12.559050   29144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 14:51:12.709134   29144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 14:51:12.868990   29144 docker.go:233] disabling docker service ...
	I0719 14:51:12.869072   29144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 14:51:12.887504   29144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 14:51:12.902635   29144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 14:51:13.050912   29144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 14:51:13.216101   29144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 14:51:13.231503   29144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 14:51:13.250302   29144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 14:51:13.250359   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.261245   29144 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 14:51:13.261291   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.272145   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.283029   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.294785   29144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 14:51:13.306215   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.316900   29144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.328175   29144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 14:51:13.338756   29144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 14:51:13.348275   29144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 14:51:13.358022   29144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:51:13.500550   29144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 14:51:16.528461   29144 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.027870368s)
	I0719 14:51:16.528488   29144 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 14:51:16.528534   29144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 14:51:16.533546   29144 start.go:563] Will wait 60s for crictl version
	I0719 14:51:16.533603   29144 ssh_runner.go:195] Run: which crictl
	I0719 14:51:16.537406   29144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 14:51:16.573517   29144 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 14:51:16.573591   29144 ssh_runner.go:195] Run: crio --version
	I0719 14:51:16.603842   29144 ssh_runner.go:195] Run: crio --version
	I0719 14:51:16.636051   29144 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 14:51:16.637775   29144 main.go:141] libmachine: (ha-999305) Calling .GetIP
	I0719 14:51:16.640426   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:16.640849   29144 main.go:141] libmachine: (ha-999305) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:55:82", ip: ""} in network mk-ha-999305: {Iface:virbr1 ExpiryTime:2024-07-19 15:38:41 +0000 UTC Type:0 Mac:52:54:00:c3:55:82 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-999305 Clientid:01:52:54:00:c3:55:82}
	I0719 14:51:16.640872   29144 main.go:141] libmachine: (ha-999305) DBG | domain ha-999305 has defined IP address 192.168.39.240 and MAC address 52:54:00:c3:55:82 in network mk-ha-999305
	I0719 14:51:16.641120   29144 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 14:51:16.645986   29144 kubeadm.go:883] updating cluster {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 14:51:16.646105   29144 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:51:16.646148   29144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:51:16.688344   29144 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:51:16.688367   29144 crio.go:433] Images already preloaded, skipping extraction
	I0719 14:51:16.688409   29144 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 14:51:16.728168   29144 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 14:51:16.728187   29144 cache_images.go:84] Images are preloaded, skipping loading
	I0719 14:51:16.728198   29144 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.30.3 crio true true} ...
	I0719 14:51:16.728318   29144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-999305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 14:51:16.728392   29144 ssh_runner.go:195] Run: crio config
	I0719 14:51:16.778642   29144 cni.go:84] Creating CNI manager for ""
	I0719 14:51:16.778663   29144 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 14:51:16.778674   29144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 14:51:16.778707   29144 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-999305 NodeName:ha-999305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 14:51:16.778877   29144 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-999305"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 14:51:16.778901   29144 kube-vip.go:115] generating kube-vip config ...
	I0719 14:51:16.778948   29144 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 14:51:16.790748   29144 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 14:51:16.790879   29144 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 14:51:16.790931   29144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 14:51:16.800505   29144 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 14:51:16.800576   29144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 14:51:16.810375   29144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 14:51:16.827616   29144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 14:51:16.844601   29144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 14:51:16.861415   29144 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 14:51:16.879337   29144 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 14:51:16.883332   29144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 14:51:17.029861   29144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 14:51:17.044852   29144 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305 for IP: 192.168.39.240
	I0719 14:51:17.044876   29144 certs.go:194] generating shared ca certs ...
	I0719 14:51:17.044897   29144 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:51:17.045072   29144 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 14:51:17.045125   29144 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 14:51:17.045136   29144 certs.go:256] generating profile certs ...
	I0719 14:51:17.045225   29144 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/client.key
	I0719 14:51:17.045258   29144 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f
	I0719 14:51:17.045276   29144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.163 192.168.39.250 192.168.39.254]
	I0719 14:51:17.130449   29144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f ...
	I0719 14:51:17.130478   29144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f: {Name:mk555a387d73727c036dcc44a211fbe6bf73fde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:51:17.130653   29144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f ...
	I0719 14:51:17.130664   29144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f: {Name:mk56197489fc1e516512c7ab5eb629df8c3584da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:51:17.130740   29144 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt.fb6d515f -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt
	I0719 14:51:17.130880   29144 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key.fb6d515f -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key
	I0719 14:51:17.130995   29144 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key
	I0719 14:51:17.131009   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 14:51:17.131023   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 14:51:17.131036   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 14:51:17.131049   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 14:51:17.131064   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 14:51:17.131077   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 14:51:17.131093   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 14:51:17.131104   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 14:51:17.131147   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 14:51:17.131176   29144 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 14:51:17.131186   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 14:51:17.131206   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 14:51:17.131226   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 14:51:17.131248   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 14:51:17.131283   29144 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 14:51:17.131307   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.131320   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.131332   29144 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.131889   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 14:51:17.158384   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 14:51:17.187884   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 14:51:17.212975   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 14:51:17.236097   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 14:51:17.259429   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 14:51:17.283132   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 14:51:17.307510   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/ha-999305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 14:51:17.331508   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 14:51:17.355030   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 14:51:17.378712   29144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 14:51:17.403275   29144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 14:51:17.420223   29144 ssh_runner.go:195] Run: openssl version
	I0719 14:51:17.426598   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 14:51:17.437802   29144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.442458   29144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.442524   29144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 14:51:17.448406   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 14:51:17.459192   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 14:51:17.470709   29144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.475219   29144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.475261   29144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 14:51:17.480867   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 14:51:17.490595   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 14:51:17.501614   29144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.506068   29144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.506110   29144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 14:51:17.511698   29144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 14:51:17.521977   29144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 14:51:17.526469   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 14:51:17.532280   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 14:51:17.537838   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 14:51:17.543443   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 14:51:17.548942   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 14:51:17.554944   29144 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 14:51:17.560732   29144 kubeadm.go:392] StartCluster: {Name:ha-999305 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-999305 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.163 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.250 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.225 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:51:17.560844   29144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 14:51:17.560879   29144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 14:51:17.596964   29144 cri.go:89] found id: "ace5d8da1e7011556e4585b160750ff561f4af73291bb21315e2b342283bdd93"
	I0719 14:51:17.596987   29144 cri.go:89] found id: "b61c51a97f4faba8350ec052b78fd4b55bf293186fbf7143cd98e00900cec56d"
	I0719 14:51:17.596993   29144 cri.go:89] found id: "5f8e15e50f632211d067214d031357c0f3c9c63aa5eca7feda3a937c498ab8f2"
	I0719 14:51:17.596998   29144 cri.go:89] found id: "40a0d71907cfc4362041b0e195a73d22bae97dc344275aaa2da78693faa9d053"
	I0719 14:51:17.597002   29144 cri.go:89] found id: "8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59"
	I0719 14:51:17.597006   29144 cri.go:89] found id: "60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75"
	I0719 14:51:17.597010   29144 cri.go:89] found id: "f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d"
	I0719 14:51:17.597014   29144 cri.go:89] found id: "3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859"
	I0719 14:51:17.597018   29144 cri.go:89] found id: "f81aa97ac4ed43dbcf51f9ca389f2c8fe519ebcc2e41afbd3c10a35fc186301e"
	I0719 14:51:17.597025   29144 cri.go:89] found id: "4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50"
	I0719 14:51:17.597029   29144 cri.go:89] found id: "85e5d02964a276c6828ce4ab956ff0f7be7faf73c33e6db54498a2af80ae8abf"
	I0719 14:51:17.597049   29144 cri.go:89] found id: "eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a"
	I0719 14:51:17.597056   29144 cri.go:89] found id: "21f9837a6d159e2808194c8f6cdfe2ef6538a257fd6fd224bbb5c301da68b723"
	I0719 14:51:17.597060   29144 cri.go:89] found id: ""
	I0719 14:51:17.597107   29144 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.744679750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400994744655450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa8072a5-3c4a-4a5b-a637-1c60b8ecb693 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.746162665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfa8de56-b78e-4ee4-96e4-eb16d12a12fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.746251839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfa8de56-b78e-4ee4-96e4-eb16d12a12fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.746672569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721400682642224837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1
e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721400682453647322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f7
50b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721400682318099782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721400183757593750,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annot
ations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970872684834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970877758331,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721399958717372581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721399958388711079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1721399938927055827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedA
t:1721399938875055751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfa8de56-b78e-4ee4-96e4-eb16d12a12fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.798657791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bfc2cad-5b7c-4f90-ba63-114dbad9fea1 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.798762495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bfc2cad-5b7c-4f90-ba63-114dbad9fea1 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.799858181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=822acc56-9c1a-425e-9c06-4f426b4bc192 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.800532062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400994800460766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=822acc56-9c1a-425e-9c06-4f426b4bc192 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.801044533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f8ac74b-630f-4dd9-b439-9a1fd7d15ef2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.801119723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f8ac74b-630f-4dd9-b439-9a1fd7d15ef2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.801506176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721400682642224837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1
e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721400682453647322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f7
50b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721400682318099782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721400183757593750,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annot
ations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970872684834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970877758331,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721399958717372581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721399958388711079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1721399938927055827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedA
t:1721399938875055751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f8ac74b-630f-4dd9-b439-9a1fd7d15ef2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.845605580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d7ee25d-90ef-4dff-9627-969a13d96495 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.845693097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d7ee25d-90ef-4dff-9627-969a13d96495 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.846986464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a8cf02e7-c95e-4d3b-8bfd-3fd28701a8fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.847518686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400994847493093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8cf02e7-c95e-4d3b-8bfd-3fd28701a8fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.848278780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af2b3d66-ad53-4961-85ca-1a02b0379442 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.848348978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af2b3d66-ad53-4961-85ca-1a02b0379442 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.848939797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721400682642224837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1
e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721400682453647322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f7
50b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721400682318099782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721400183757593750,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annot
ations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970872684834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970877758331,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721399958717372581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721399958388711079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1721399938927055827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedA
t:1721399938875055751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af2b3d66-ad53-4961-85ca-1a02b0379442 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.893719883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e14c6ea-f8c0-4823-9ea9-6203d2cb1d32 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.893808924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e14c6ea-f8c0-4823-9ea9-6203d2cb1d32 name=/runtime.v1.RuntimeService/Version
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.894943309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b361b977-50a8-4909-89f4-1004ec59a7de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.895374164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721400994895352650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b361b977-50a8-4909-89f4-1004ec59a7de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.896147295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2a47e2a-b72b-46f7-a819-75bd58637002 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.896216566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2a47e2a-b72b-46f7-a819-75bd58637002 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 14:56:34 ha-999305 crio[3814]: time="2024-07-19 14:56:34.896634890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5d2dbf7d618538f31375cd87bdede99fd3533d370163af2627537cc171e61f95,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721400779359371916,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721400723357005248,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Annotations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721400721353563830,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f750b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7feb50f1802cc518ce0bfba149819196c4c67293c490a6c0183b6af3b122e17d,PodSandboxId:d11211ae99c55f8fd9d980b70173e76cf051bc6830ea3aecf62ef69a8fdd6dec,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721400715868859613,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annotations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c3a36939e35f0d6961f51c40b73656efe23a591fd59af55b0cce8dc8b52d23e,PodSandboxId:d2ea66b603ce53285979c63b4b9daef0cb78c3e43ef894d7379204ea55bddc54,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721400696481579467,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a7dfba96665bd8b5110250981ccbb6a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc,PodSandboxId:903035f21620ca3a0649fa30d3acd8cf6c661de995ca1750116e006c35371716,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721400682642224837,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc00743-8980-495b-9a44-c3d3d42829f6,},Annotations:map[string]string{io.kubernetes.container.hash: 297cd4bd,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2,PodSandboxId:aa808b8f714e48f9cd6ef0e921f133fc9f0d69803e4a9414621632ac119c9476,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721400682658235653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e,PodSandboxId:07b84953816d30339e8cc73882f94a2c2a11cf53be38904d05a0075e0b32fd40,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682976266564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kubernetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b,PodSandboxId:d7f9fb434d5189c9bdd7c5630e51802f5e62cbf5e7440af5a809d0ded2ed8b13,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721400682746358289,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43,PodSandboxId:2f475acbf17eee523289fa8a6418e11e7dd7e478489a96c0f3f6cb3a55c740cd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721400682711719561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3,PodSandboxId:6106ded0c51a3a0956c499038ec71124e4601e1402d49fbc489f0ea5071e19b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721400682559996665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763,PodSandboxId:768143bcd18941779d9353c77dff643063e7fbd3924b742b780d64aeefa6215e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721400682476537846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1
e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b,PodSandboxId:598787c3a3b5c0e4e7d435734cbafe33217d15107703c90261d6784aa296790e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721400682453647322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dc610418d0256f7
50b6fcb062df4e70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073,PodSandboxId:9f9bfe783394771da3a918625ff94e1f262f6d16dfc3392bb0c91d42ed8fe77a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721400682318099782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f97b8931ee147a8b6b7be70edef5c8c,},Ann
otations:map[string]string{io.kubernetes.container.hash: 1eec5b3f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401082f94c28820c3d700ddd879958d1f6b1c19d7103ac2bb8df53a6c385a43,PodSandboxId:f0b7b801c04fe2ef20592dab8aa42d3c8cf1687890b713382f19906f3549b514,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721400183757593750,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-2rfw6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 25cd3990-0ad4-44e2-895c-4e8c81e621af,},Annot
ations:map[string]string{io.kubernetes.container.hash: f65b58f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75,PodSandboxId:1eb500abeaf599e8cb49e9da77773469ed80d852b2fa7d7b1e4dbe5e9601aa06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970872684834,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9sxgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f394b2d0-345c-4f2c-9c30-4c7c8c13361b,},Annotations:map[string]string{io.kube
rnetes.container.hash: 869a458a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59,PodSandboxId:35affd85abc522da7e710ed9f5245c0fd223cee25dd7035c30f0bb7edec0a143,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721399970877758331,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gtwxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ccad831-1940-4a7c-bea7-a73b07f9d3a2,},Annotations:map[string]string{io.kubernetes.container.hash: ea3843fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d,PodSandboxId:b21ce83a41d26cbec4c6ae531d60e93698ac48d0cd772ae0f9e21838302b46dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721399958717372581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tpffr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6847e94-cf07-4fa7-9729-dca36c54672e,},Annotations:map[string]string{io.kubernetes.container.hash: c626c221,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859,PodSandboxId:0bc58fc40b11b8e528c518d994f61ba43b649d8efb765758b1d6fd14ac8fedd7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721399958388711079,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2wb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f96f5ff-96c6-460c-b8da-23d5dda42745,},Annotations:map[string]string{io.kubernetes.container.hash: 3e474b15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50,PodSandboxId:4fe960d43fbe438f6c37a69e5866a3dc65f157ef92c22c2fcbeea735a817f0f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1721399938927055827,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7c6c44e50a74c1ab1df915e3708a4b0,},Annotations:map[string]string{io.kubernetes.container.hash: 3b8c9e8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a,PodSandboxId:01e1ea6c3d6e90880366e44c5129ee9e6f30c94b19bbd1bdceab9b0cc3ab0bdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedA
t:1721399938875055751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-999305,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 225afe64001307a6e59a1e30b782f3b5,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2a47e2a-b72b-46f7-a819-75bd58637002 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5d2dbf7d61853       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   903035f21620c       storage-provisioner
	29a4d735b7ed0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   9f9bfe7833947       kube-apiserver-ha-999305
	dfafedeba739f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   598787c3a3b5c       kube-controller-manager-ha-999305
	7feb50f1802cc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   d11211ae99c55       busybox-fc5497c4f-2rfw6
	7c3a36939e35f       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   d2ea66b603ce5       kube-vip-ha-999305
	61f7ef08f69aa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   07b84953816d3       coredns-7db6d8ff4d-9sxgr
	2c7086de4c9cf       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      5 minutes ago       Running             kindnet-cni               1                   d7f9fb434d518       kindnet-tpffr
	4abf3705cf91b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   2f475acbf17ee       etcd-ha-999305
	a7e947cd85904       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   aa808b8f714e4       kube-proxy-s2wb7
	3b4f082bbec57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   903035f21620c       storage-provisioner
	95527d65a8d52       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   6106ded0c51a3       coredns-7db6d8ff4d-gtwxd
	6af80a77cbda0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   768143bcd1894       kube-scheduler-ha-999305
	fa3370ea85a8d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   598787c3a3b5c       kube-controller-manager-ha-999305
	1d0019ea14b1f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   9f9bfe7833947       kube-apiserver-ha-999305
	d401082f94c28       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   f0b7b801c04fe       busybox-fc5497c4f-2rfw6
	8a1cd64a0c897       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   35affd85abc52       coredns-7db6d8ff4d-gtwxd
	60ddffbf7c51f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   1eb500abeaf59       coredns-7db6d8ff4d-9sxgr
	f411cdcc4b000       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      17 minutes ago      Exited              kindnet-cni               0                   b21ce83a41d26       kindnet-tpffr
	3df47e2e7e71d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      17 minutes ago      Exited              kube-proxy                0                   0bc58fc40b11b       kube-proxy-s2wb7
	4106d6aa51360       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   4fe960d43fbe4       etcd-ha-999305
	eea532e07ff56       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   01e1ea6c3d6e9       kube-scheduler-ha-999305
	
	
	==> coredns [60ddffbf7c51f1746aa8395300c7e0e70501f7ec7deaa0825c9596050ffa6b75] <==
	[INFO] 10.244.0.4:37231 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199786s
	[INFO] 10.244.0.4:46408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015144s
	[INFO] 10.244.2.2:44298 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253661s
	[INFO] 10.244.2.2:46320 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000124288s
	[INFO] 10.244.2.2:55428 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001507596s
	[INFO] 10.244.2.2:49678 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072967s
	[INFO] 10.244.1.2:50895 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001783712s
	[INFO] 10.244.1.2:40165 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093772s
	[INFO] 10.244.1.2:53172 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001252641s
	[INFO] 10.244.1.2:34815 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000105356s
	[INFO] 10.244.1.2:37850 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000213269s
	[INFO] 10.244.2.2:37470 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132796s
	[INFO] 10.244.1.2:53739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116332s
	[INFO] 10.244.1.2:49785 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150432s
	[INFO] 10.244.1.2:39191 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095042s
	[INFO] 10.244.0.4:54115 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000158247s
	[INFO] 10.244.2.2:54824 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010194s
	[INFO] 10.244.2.2:53937 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137939s
	[INFO] 10.244.2.2:32859 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135977s
	[INFO] 10.244.1.2:38346 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011678s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	
	
	==> coredns [61f7ef08f69aaeb72d381df21b77a24c36941b8de4e66a5a78351e5b64ceb07e] <==
	Trace[1998108003]: [10.387021259s] [10.387021259s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:33670->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33680->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1105691522]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 14:51:34.688) (total time: 10133ms):
	Trace[1105691522]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33680->10.96.0.1:443: read: connection reset by peer 10133ms (14:51:44.822)
	Trace[1105691522]: [10.133947768s] [10.133947768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:33680->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [8a1cd64a0c897e1f6efb6cef4d63898611463ee1ea2b810d672f76d74b428e59] <==
	[INFO] 10.244.0.4:53550 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131429s
	[INFO] 10.244.2.2:43045 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176233s
	[INFO] 10.244.2.2:58868 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001941494s
	[INFO] 10.244.2.2:46158 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115413s
	[INFO] 10.244.2.2:48082 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182529s
	[INFO] 10.244.1.2:43898 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136537s
	[INFO] 10.244.1.2:41884 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111392s
	[INFO] 10.244.1.2:37393 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070881s
	[INFO] 10.244.0.4:38875 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088591s
	[INFO] 10.244.0.4:39118 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123769s
	[INFO] 10.244.0.4:52630 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000045788s
	[INFO] 10.244.0.4:40500 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041439s
	[INFO] 10.244.2.2:60125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195649s
	[INFO] 10.244.2.2:60453 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126438s
	[INFO] 10.244.2.2:49851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00022498s
	[INFO] 10.244.1.2:57692 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010212s
	[INFO] 10.244.0.4:59894 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230322s
	[INFO] 10.244.0.4:42506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177637s
	[INFO] 10.244.0.4:53162 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099069s
	[INFO] 10.244.2.2:44371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126437s
	[INFO] 10.244.1.2:47590 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107441s
	[INFO] 10.244.1.2:44734 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130206s
	[INFO] 10.244.1.2:33311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075949s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [95527d65a8d52e54e8262ad6455de4478f2fc6bebe3596fdd77032926396b3d3] <==
	Trace[1082151208]: [10.00115321s] [10.00115321s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:32878->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:32878->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:32864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1762988053]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 14:51:34.511) (total time: 10311ms):
	Trace[1762988053]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:32864->10.96.0.1:443: read: connection reset by peer 10311ms (14:51:44.822)
	Trace[1762988053]: [10.311303354s] [10.311303354s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:32864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-999305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T14_39_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:56:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:52:07 +0000   Fri, 19 Jul 2024 14:39:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-999305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1230c1bed065421db8c3e4d5f899877a
	  System UUID:                1230c1be-d065-421d-b8c3-e4d5f899877a
	  Boot ID:                    7e7082ac-a784-4d5a-9539-9692157a7b3e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2rfw6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-9sxgr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-gtwxd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-999305                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-tpffr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-999305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-999305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-s2wb7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-999305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-999305                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 4m27s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-999305 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-999305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-999305 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-999305 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Warning  ContainerGCFailed        5m30s (x2 over 6m30s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-999305 event: Registered Node ha-999305 in Controller
	
	
	Name:               ha-999305-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_41_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:41:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:56:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:52:48 +0000   Fri, 19 Jul 2024 14:52:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.163
	  Hostname:    ha-999305-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27a97bc8637c4fba94a7bb397a84b598
	  System UUID:                27a97bc8-637c-4fba-94a7-bb397a84b598
	  Boot ID:                    976ac5cc-cf36-40bb-b39c-6e0ab51c2d42
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pcfwd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-999305-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-hsb9f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-999305-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-999305-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-766sx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-999305-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-999305-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-999305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-999305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-999305-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-999305-m02 status is now: NodeNotReady
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-999305-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-999305-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-999305-m02 event: Registered Node ha-999305-m02 in Controller
	
	
	Name:               ha-999305-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-999305-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=ha-999305
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T14_43_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:43:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-999305-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:54:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:54:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:54:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:54:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 14:53:47 +0000   Fri, 19 Jul 2024 14:54:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-999305-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 74d4c450135c44d386a1cb39310dd813
	  System UUID:                74d4c450-135c-44d3-86a1-cb39310dd813
	  Boot ID:                    e27ebc77-8f7e-410d-8686-f4482a9e2888
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-px8jd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-j9gzv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-qqtph           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-999305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-999305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-999305-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-999305-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-999305-m04 event: Registered Node ha-999305-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-999305-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-999305-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-999305-m04 has been rebooted, boot id: e27ebc77-8f7e-410d-8686-f4482a9e2888
	  Normal   NodeReady                2m48s                  kubelet          Node ha-999305-m04 status is now: NodeReady
	  Normal   NodeNotReady             103s (x2 over 3m43s)   node-controller  Node ha-999305-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.149646] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.056448] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062757] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.176758] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.118673] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.280022] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.245148] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.893793] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +0.060163] kauditd_printk_skb: 158 callbacks suppressed
	[Jul19 14:39] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.183971] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +6.719863] kauditd_printk_skb: 23 callbacks suppressed
	[ +19.024750] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 14:41] kauditd_printk_skb: 26 callbacks suppressed
	[Jul19 14:48] kauditd_printk_skb: 1 callbacks suppressed
	[Jul19 14:51] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[  +0.154974] systemd-fstab-generator[3745]: Ignoring "noauto" option for root device
	[  +0.190442] systemd-fstab-generator[3759]: Ignoring "noauto" option for root device
	[  +0.162552] systemd-fstab-generator[3772]: Ignoring "noauto" option for root device
	[  +0.287386] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +3.521466] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	[  +5.118809] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.341591] kauditd_printk_skb: 85 callbacks suppressed
	[Jul19 14:52] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [4106d6aa51360f5b465ed388b40f5012fb6d82b9c1a1b11a59a9b5a0f35b2f50] <==
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T14:49:41.190071Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:49:40.5802Z","time spent":"609.864297ms","remote":"127.0.0.1:39214","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T14:49:41.190085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:49:33.328829Z","time spent":"7.861252178s","remote":"127.0.0.1:39056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T14:49:41.189865Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T14:49:40.571681Z","time spent":"618.179005ms","remote":"127.0.0.1:39450","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 "}
	2024/07/19 14:49:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-19T14:49:41.232513Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1cdefa49b8abbef9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-19T14:49:41.232688Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.232732Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.232778Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.232939Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233098Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233132Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"af1cb735ec0c662e"}
	{"level":"info","ts":"2024-07-19T14:49:41.233141Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233156Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233174Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233266Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233338Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233418Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.233462Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:49:41.236958Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2024-07-19T14:49:41.237121Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2024-07-19T14:49:41.237165Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-999305","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	
	
	==> etcd [4abf3705cf91bfe1ec832694ddfd292457d7d0d40ced5587643973788e45ce43] <==
	{"level":"info","ts":"2024-07-19T14:53:05.856326Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:53:05.856427Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:53:05.873133Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"cb8c47d19ac5c821","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-19T14:53:05.873307Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:53:05.879367Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"cb8c47d19ac5c821","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T14:53:05.879502Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:54:01.201306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 switched to configuration voters=(2080375272429567737 12618161698206672430)"}
	{"level":"info","ts":"2024-07-19T14:54:01.206357Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","removed-remote-peer-id":"cb8c47d19ac5c821","removed-remote-peer-urls":["https://192.168.39.250:2380"]}
	{"level":"info","ts":"2024-07-19T14:54:01.20655Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"warn","ts":"2024-07-19T14:54:01.207211Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"warn","ts":"2024-07-19T14:54:01.20661Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"1cdefa49b8abbef9","removed-member-id":"cb8c47d19ac5c821"}
	{"level":"warn","ts":"2024-07-19T14:54:01.207602Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-07-19T14:54:01.207495Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"warn","ts":"2024-07-19T14:54:01.208687Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:54:01.208855Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:54:01.214137Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"warn","ts":"2024-07-19T14:54:01.21459Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821","error":"context canceled"}
	{"level":"warn","ts":"2024-07-19T14:54:01.214677Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"cb8c47d19ac5c821","error":"failed to read cb8c47d19ac5c821 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-19T14:54:01.214734Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"warn","ts":"2024-07-19T14:54:01.215229Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821","error":"context canceled"}
	{"level":"info","ts":"2024-07-19T14:54:01.215338Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:54:01.215408Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"info","ts":"2024-07-19T14:54:01.215535Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"1cdefa49b8abbef9","removed-remote-peer-id":"cb8c47d19ac5c821"}
	{"level":"warn","ts":"2024-07-19T14:54:01.225379Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.250:53374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-19T14:54:01.227535Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.250:53384","server-name":"","error":"read tcp 192.168.39.240:2380->192.168.39.250:53384: read: connection reset by peer"}
	
	
	==> kernel <==
	 14:56:35 up 18 min,  0 users,  load average: 0.25, 0.49, 0.36
	Linux ha-999305 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c7086de4c9cf6aa3dac1bcb0da59df115057e4f2b6c0e3e0a54e0c7cde6e23b] <==
	I0719 14:55:53.903965       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:56:03.909990       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:56:03.910142       1 main.go:303] handling current node
	I0719 14:56:03.910176       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:56:03.910214       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:56:03.910353       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:56:03.910374       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:56:13.910016       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:56:13.910059       1 main.go:303] handling current node
	I0719 14:56:13.910087       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:56:13.910092       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:56:13.910265       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:56:13.910289       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:56:23.901816       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:56:23.902101       1 main.go:303] handling current node
	I0719 14:56:23.902137       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:56:23.902161       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:56:23.902310       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:56:23.902331       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:56:33.910582       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:56:33.910721       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:56:33.910978       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:56:33.911017       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:56:33.911099       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:56:33.911120       1 main.go:303] handling current node
	
	
	==> kindnet [f411cdcc4b000ff3cb14f78ea3c31dc269db60bb4857a57e3e040ef551f2e56d] <==
	I0719 14:49:09.901111       1 main.go:303] handling current node
	I0719 14:49:19.892004       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:49:19.892072       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:49:19.892960       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:49:19.893020       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:49:19.893359       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:49:19.893399       1 main.go:303] handling current node
	I0719 14:49:19.893429       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:49:19.893439       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:49:29.893088       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:49:29.893197       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:49:29.893364       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:49:29.893398       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	I0719 14:49:29.893472       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:49:29.893481       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:49:29.893546       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:49:29.893574       1 main.go:303] handling current node
	I0719 14:49:39.897017       1 main.go:299] Handling node with IPs: map[192.168.39.225:{}]
	I0719 14:49:39.897065       1 main.go:326] Node ha-999305-m04 has CIDR [10.244.3.0/24] 
	I0719 14:49:39.897303       1 main.go:299] Handling node with IPs: map[192.168.39.240:{}]
	I0719 14:49:39.897329       1 main.go:303] handling current node
	I0719 14:49:39.897345       1 main.go:299] Handling node with IPs: map[192.168.39.163:{}]
	I0719 14:49:39.897350       1 main.go:326] Node ha-999305-m02 has CIDR [10.244.1.0/24] 
	I0719 14:49:39.897395       1 main.go:299] Handling node with IPs: map[192.168.39.250:{}]
	I0719 14:49:39.897414       1 main.go:326] Node ha-999305-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [1d0019ea14b1f1f0eeaab068ca83bd6c972f27c29b3914b77bf4938eaf930073] <==
	I0719 14:51:22.812148       1 options.go:221] external host was not specified, using 192.168.39.240
	I0719 14:51:22.834194       1 server.go:148] Version: v1.30.3
	I0719 14:51:22.835586       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:51:23.685363       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0719 14:51:23.688960       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 14:51:23.697592       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0719 14:51:23.697627       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0719 14:51:23.697814       1 instance.go:299] Using reconciler: lease
	W0719 14:51:43.678753       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0719 14:51:43.685420       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0719 14:51:43.701275       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [29a4d735b7ed0983736fca65a1ef4bfda99c301935bf5f2fda781e5b41a2b8a4] <==
	I0719 14:52:05.255105       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0719 14:52:05.255142       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0719 14:52:05.256988       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0719 14:52:05.348756       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 14:52:05.352011       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 14:52:05.352264       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 14:52:05.380320       1 aggregator.go:165] initial CRD sync complete...
	I0719 14:52:05.380374       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 14:52:05.380381       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 14:52:05.380387       1 cache.go:39] Caches are synced for autoregister controller
	I0719 14:52:05.385954       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 14:52:05.399685       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 14:52:05.399764       1 policy_source.go:224] refreshing policies
	I0719 14:52:05.399950       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 14:52:05.441030       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 14:52:05.441691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 14:52:05.442979       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 14:52:05.443063       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	W0719 14:52:05.461222       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.163 192.168.39.250]
	I0719 14:52:05.463169       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 14:52:05.482624       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 14:52:05.486091       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0719 14:52:05.492389       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0719 14:52:06.247621       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 14:52:06.644513       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.163 192.168.39.240 192.168.39.250]
	
	
	==> kube-controller-manager [dfafedeba739fe9732d241e86f99fa2b378bbb40489962cd897f829e59fe86d1] <==
	E0719 14:54:37.434166       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	E0719 14:54:37.434195       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	E0719 14:54:37.434219       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	E0719 14:54:37.434245       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	I0719 14:54:52.482325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.061887ms"
	I0719 14:54:52.487086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.727µs"
	E0719 14:54:57.434696       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	E0719 14:54:57.434772       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	E0719 14:54:57.434794       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	E0719 14:54:57.434803       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	E0719 14:54:57.434816       1 gc_controller.go:153] "Failed to get node" err="node \"ha-999305-m03\" not found" logger="pod-garbage-collector-controller" node="ha-999305-m03"
	I0719 14:54:57.450232       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-999305-m03"
	I0719 14:54:57.482226       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-999305-m03"
	I0719 14:54:57.482400       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-999305-m03"
	I0719 14:54:57.521713       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-999305-m03"
	I0719 14:54:57.521765       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-999305-m03"
	I0719 14:54:57.558782       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-999305-m03"
	I0719 14:54:57.558819       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-999305-m03"
	I0719 14:54:57.663477       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-999305-m03"
	I0719 14:54:57.663553       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-b7lvb"
	I0719 14:54:57.696110       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-b7lvb"
	I0719 14:54:57.696149       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-twh47"
	I0719 14:54:57.722833       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-twh47"
	I0719 14:54:57.723121       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-999305-m03"
	I0719 14:54:57.751438       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-999305-m03"
	
	
	==> kube-controller-manager [fa3370ea85a8d996beb1ad3822ec97cf4a3a980a895b1e8d8fc07bd243774a0b] <==
	I0719 14:51:23.691624       1 serving.go:380] Generated self-signed cert in-memory
	I0719 14:51:24.107910       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0719 14:51:24.107954       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:51:24.109832       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0719 14:51:24.110078       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 14:51:24.110294       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 14:51:24.110493       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0719 14:51:44.708367       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.240:8443/healthz\": dial tcp 192.168.39.240:8443: connect: connection refused"
	
	
	==> kube-proxy [3df47e2e7e71d00c94f4b970182a3e9717da31d663db7ad6d1b911660b9f7859] <==
	E0719 14:48:38.200538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:41.271540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:41.271615       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:41.271701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:41.271738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:44.343206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:44.343270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:47.416064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:47.416190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:50.486739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:50.488231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:50.488125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:50.488365       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:48:56.633323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:48:56.633714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:02.775358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:02.775540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:05.846774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:05.847064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:18.135499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:18.135559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-999305&resourceVersion=2061": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:21.206810       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:21.207061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2011": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 14:49:27.351355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 14:49:27.351697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1957": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [a7e947cd85904665ca2f28162d13591b7b3c152f7d838575ba52ad13506260b2] <==
	I0719 14:51:24.180579       1 server_linux.go:69] "Using iptables proxy"
	E0719 14:51:27.158434       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:30.231645       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:33.303284       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:39.446300       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 14:51:48.662776       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-999305\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0719 14:52:07.832146       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	I0719 14:52:07.869478       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 14:52:07.869528       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 14:52:07.869544       1 server_linux.go:165] "Using iptables Proxier"
	I0719 14:52:07.872272       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 14:52:07.872503       1 server.go:872] "Version info" version="v1.30.3"
	I0719 14:52:07.872723       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:52:07.874408       1 config.go:192] "Starting service config controller"
	I0719 14:52:07.874506       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 14:52:07.874572       1 config.go:101] "Starting endpoint slice config controller"
	I0719 14:52:07.874592       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 14:52:07.875305       1 config.go:319] "Starting node config controller"
	I0719 14:52:07.875343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 14:52:07.975983       1 shared_informer.go:320] Caches are synced for service config
	I0719 14:52:07.976507       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 14:52:07.976568       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6af80a77cbda0dbd341b767c66464e70785b7107464fb68291562ffd5bf41763] <==
	W0719 14:52:00.750529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:00.750608       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:01.470603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.240:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:01.470653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.240:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:01.983475       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.240:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:01.983574       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.240:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:02.234433       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:02.234509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:02.395474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0719 14:52:02.395548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	W0719 14:52:05.272626       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:52:05.274985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:52:05.275499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 14:52:05.275606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 14:52:05.275741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 14:52:05.275815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 14:52:05.276038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 14:52:05.276130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 14:52:05.276269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:52:05.276353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0719 14:52:15.113703       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 14:53:57.897785       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-px8jd\": pod busybox-fc5497c4f-px8jd is already assigned to node \"ha-999305-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-px8jd" node="ha-999305-m04"
	E0719 14:53:57.900246       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9525a555-905d-421a-990d-59bb44d3e060(default/busybox-fc5497c4f-px8jd) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-px8jd"
	E0719 14:53:57.900367       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-px8jd\": pod busybox-fc5497c4f-px8jd is already assigned to node \"ha-999305-m04\"" pod="default/busybox-fc5497c4f-px8jd"
	I0719 14:53:57.900459       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-px8jd" node="ha-999305-m04"
	
	
	==> kube-scheduler [eea532e07ff56bc395aa4cf137a9b87ed35eaa809769a2471978f8cec17de70a] <==
	W0719 14:49:37.108746       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 14:49:37.108833       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 14:49:37.124702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:49:37.124811       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 14:49:37.138068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 14:49:37.138151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 14:49:37.178638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 14:49:37.178751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 14:49:37.341183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:49:37.341285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:49:37.592984       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 14:49:37.593094       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 14:49:40.023498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 14:49:40.023601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 14:49:40.481803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 14:49:40.481836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 14:49:40.842009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:49:40.842112       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 14:49:40.862343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 14:49:40.862375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 14:49:40.898799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 14:49:40.898971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 14:49:41.067927       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 14:49:41.068007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 14:49:41.144659       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 14:52:45 ha-999305 kubelet[1369]: I0719 14:52:45.345814    1369 scope.go:117] "RemoveContainer" containerID="3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc"
	Jul 19 14:52:45 ha-999305 kubelet[1369]: E0719 14:52:45.346256    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5dc00743-8980-495b-9a44-c3d3d42829f6)\"" pod="kube-system/storage-provisioner" podUID="5dc00743-8980-495b-9a44-c3d3d42829f6"
	Jul 19 14:52:45 ha-999305 kubelet[1369]: I0719 14:52:45.376653    1369 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-999305"
	Jul 19 14:52:59 ha-999305 kubelet[1369]: I0719 14:52:59.342385    1369 scope.go:117] "RemoveContainer" containerID="3b4f082bbec576b730dd729029716b2cb79b139bd41bec99f37475911bb19abc"
	Jul 19 14:52:59 ha-999305 kubelet[1369]: I0719 14:52:59.620472    1369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-999305" podStartSLOduration=14.620448461 podStartE2EDuration="14.620448461s" podCreationTimestamp="2024-07-19 14:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 14:52:55.364179008 +0000 UTC m=+830.207173048" watchObservedRunningTime="2024-07-19 14:52:59.620448461 +0000 UTC m=+834.463442499"
	Jul 19 14:53:05 ha-999305 kubelet[1369]: E0719 14:53:05.411572    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:53:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:53:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:53:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:53:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:54:05 ha-999305 kubelet[1369]: E0719 14:54:05.411095    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:54:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:54:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:54:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:54:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:55:05 ha-999305 kubelet[1369]: E0719 14:55:05.413286    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:55:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:55:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:55:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:55:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 14:56:05 ha-999305 kubelet[1369]: E0719 14:56:05.411371    1369 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 14:56:05 ha-999305 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 14:56:05 ha-999305 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 14:56:05 ha-999305 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 14:56:05 ha-999305 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 14:56:34.448194   31509 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-3847/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-999305 -n ha-999305
helpers_test.go:261: (dbg) Run:  kubectl --context ha-999305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.91s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (326.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-121443
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-121443
E0719 15:12:29.032832   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-121443: exit status 82 (2m1.83945379s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-121443-m03"  ...
	* Stopping node "multinode-121443-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-121443" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-121443 --wait=true -v=8 --alsologtostderr
E0719 15:14:28.744107   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 15:15:32.080010   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-121443 --wait=true -v=8 --alsologtostderr: (3m22.02446977s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-121443
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-121443 -n multinode-121443
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-121443 logs -n 25: (1.554306253s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4276887194/001/cp-test_multinode-121443-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443:/home/docker/cp-test_multinode-121443-m02_multinode-121443.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443 sudo cat                                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m02_multinode-121443.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03:/home/docker/cp-test_multinode-121443-m02_multinode-121443-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443-m03 sudo cat                                   | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m02_multinode-121443-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp testdata/cp-test.txt                                                | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4276887194/001/cp-test_multinode-121443-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443:/home/docker/cp-test_multinode-121443-m03_multinode-121443.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443 sudo cat                                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m03_multinode-121443.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02:/home/docker/cp-test_multinode-121443-m03_multinode-121443-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443-m02 sudo cat                                   | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m03_multinode-121443-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-121443 node stop m03                                                          | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	| node    | multinode-121443 node start                                                             | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-121443                                                                | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:11 UTC |                     |
	| stop    | -p multinode-121443                                                                     | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:11 UTC |                     |
	| start   | -p multinode-121443                                                                     | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:13 UTC | 19 Jul 24 15:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-121443                                                                | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:13:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:13:29.170840   40893 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:13:29.170944   40893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:13:29.170952   40893 out.go:304] Setting ErrFile to fd 2...
	I0719 15:13:29.170957   40893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:13:29.171119   40893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:13:29.171608   40893 out.go:298] Setting JSON to false
	I0719 15:13:29.172482   40893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3355,"bootTime":1721398654,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:13:29.172537   40893 start.go:139] virtualization: kvm guest
	I0719 15:13:29.174813   40893 out.go:177] * [multinode-121443] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:13:29.176156   40893 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:13:29.176156   40893 notify.go:220] Checking for updates...
	I0719 15:13:29.177561   40893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:13:29.178826   40893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:13:29.180107   40893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:13:29.181556   40893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:13:29.182857   40893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:13:29.184445   40893 config.go:182] Loaded profile config "multinode-121443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:13:29.184530   40893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:13:29.184935   40893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:13:29.184987   40893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:13:29.199530   40893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
	I0719 15:13:29.199924   40893 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:13:29.200468   40893 main.go:141] libmachine: Using API Version  1
	I0719 15:13:29.200487   40893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:13:29.200797   40893 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:13:29.200981   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:13:29.233602   40893 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:13:29.234815   40893 start.go:297] selected driver: kvm2
	I0719 15:13:29.234827   40893 start.go:901] validating driver "kvm2" against &{Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:13:29.234994   40893 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:13:29.235381   40893 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:13:29.235455   40893 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:13:29.249695   40893 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:13:29.250634   40893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:13:29.250711   40893 cni.go:84] Creating CNI manager for ""
	I0719 15:13:29.250725   40893 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 15:13:29.250797   40893 start.go:340] cluster config:
	{Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:13:29.250982   40893 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:13:29.252652   40893 out.go:177] * Starting "multinode-121443" primary control-plane node in "multinode-121443" cluster
	I0719 15:13:29.253787   40893 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:13:29.253816   40893 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:13:29.253824   40893 cache.go:56] Caching tarball of preloaded images
	I0719 15:13:29.253891   40893 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:13:29.253900   40893 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:13:29.254007   40893 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/config.json ...
	I0719 15:13:29.254179   40893 start.go:360] acquireMachinesLock for multinode-121443: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:13:29.254214   40893 start.go:364] duration metric: took 19.692µs to acquireMachinesLock for "multinode-121443"
	I0719 15:13:29.254226   40893 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:13:29.254230   40893 fix.go:54] fixHost starting: 
	I0719 15:13:29.254505   40893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:13:29.254535   40893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:13:29.267847   40893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0719 15:13:29.268320   40893 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:13:29.268762   40893 main.go:141] libmachine: Using API Version  1
	I0719 15:13:29.268785   40893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:13:29.269077   40893 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:13:29.269245   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:13:29.269380   40893 main.go:141] libmachine: (multinode-121443) Calling .GetState
	I0719 15:13:29.270923   40893 fix.go:112] recreateIfNeeded on multinode-121443: state=Running err=<nil>
	W0719 15:13:29.270939   40893 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:13:29.273568   40893 out.go:177] * Updating the running kvm2 "multinode-121443" VM ...
	I0719 15:13:29.274927   40893 machine.go:94] provisionDockerMachine start ...
	I0719 15:13:29.274945   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:13:29.275107   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.277439   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.277929   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.277950   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.278125   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.278288   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.278421   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.278554   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.278720   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.278892   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.278901   40893 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:13:29.399808   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-121443
	
	I0719 15:13:29.399835   40893 main.go:141] libmachine: (multinode-121443) Calling .GetMachineName
	I0719 15:13:29.400065   40893 buildroot.go:166] provisioning hostname "multinode-121443"
	I0719 15:13:29.400091   40893 main.go:141] libmachine: (multinode-121443) Calling .GetMachineName
	I0719 15:13:29.400245   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.402935   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.403248   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.403275   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.403418   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.403580   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.403721   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.403835   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.403978   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.404140   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.404151   40893 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-121443 && echo "multinode-121443" | sudo tee /etc/hostname
	I0719 15:13:29.542519   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-121443
	
	I0719 15:13:29.542544   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.544998   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.545334   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.545362   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.545480   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.545643   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.545812   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.545922   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.546086   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.546324   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.546344   40893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-121443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-121443/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-121443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:13:29.667833   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:13:29.667861   40893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:13:29.667898   40893 buildroot.go:174] setting up certificates
	I0719 15:13:29.667908   40893 provision.go:84] configureAuth start
	I0719 15:13:29.667927   40893 main.go:141] libmachine: (multinode-121443) Calling .GetMachineName
	I0719 15:13:29.668169   40893 main.go:141] libmachine: (multinode-121443) Calling .GetIP
	I0719 15:13:29.670421   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.670738   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.670769   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.670933   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.673264   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.673599   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.673630   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.673758   40893 provision.go:143] copyHostCerts
	I0719 15:13:29.673794   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:13:29.673834   40893 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:13:29.673846   40893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:13:29.673915   40893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:13:29.674001   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:13:29.674019   40893 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:13:29.674025   40893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:13:29.674050   40893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:13:29.674107   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:13:29.674122   40893 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:13:29.674128   40893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:13:29.674148   40893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:13:29.674206   40893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.multinode-121443 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-121443]
	I0719 15:13:29.827902   40893 provision.go:177] copyRemoteCerts
	I0719 15:13:29.827952   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:13:29.827973   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.830681   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.831054   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.831087   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.831233   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.831396   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.831567   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.831683   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:13:29.917142   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 15:13:29.917199   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 15:13:29.942650   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 15:13:29.942730   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:13:29.967966   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 15:13:29.968026   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:13:29.991024   40893 provision.go:87] duration metric: took 323.103999ms to configureAuth
	I0719 15:13:29.991046   40893 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:13:29.991253   40893 config.go:182] Loaded profile config "multinode-121443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:13:29.991347   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.993785   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.994108   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.994133   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.994331   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.994514   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.994660   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.994790   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.995048   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.995257   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.995273   40893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:15:00.847984   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:15:00.848012   40893 machine.go:97] duration metric: took 1m31.573071971s to provisionDockerMachine
	I0719 15:15:00.848024   40893 start.go:293] postStartSetup for "multinode-121443" (driver="kvm2")
	I0719 15:15:00.848035   40893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:15:00.848051   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:00.848406   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:15:00.848431   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:00.851790   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.852267   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:00.852288   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.852506   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:00.852699   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:00.852824   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:00.852979   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:15:00.942220   40893 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:15:00.946418   40893 command_runner.go:130] > NAME=Buildroot
	I0719 15:15:00.946436   40893 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 15:15:00.946440   40893 command_runner.go:130] > ID=buildroot
	I0719 15:15:00.946445   40893 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 15:15:00.946449   40893 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 15:15:00.946482   40893 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:15:00.946496   40893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:15:00.946544   40893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:15:00.946609   40893 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:15:00.946620   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 15:15:00.946712   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:15:00.956222   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:15:00.980132   40893 start.go:296] duration metric: took 132.096007ms for postStartSetup
	I0719 15:15:00.980169   40893 fix.go:56] duration metric: took 1m31.725938844s for fixHost
	I0719 15:15:00.980188   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:00.982758   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.983064   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:00.983102   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.983354   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:00.983540   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:00.983698   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:00.983844   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:00.983993   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:15:00.984202   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:15:00.984253   40893 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:15:01.095372   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721402101.074141469
	
	I0719 15:15:01.095393   40893 fix.go:216] guest clock: 1721402101.074141469
	I0719 15:15:01.095402   40893 fix.go:229] Guest: 2024-07-19 15:15:01.074141469 +0000 UTC Remote: 2024-07-19 15:15:00.980173586 +0000 UTC m=+91.842218458 (delta=93.967883ms)
	I0719 15:15:01.095426   40893 fix.go:200] guest clock delta is within tolerance: 93.967883ms
	I0719 15:15:01.095432   40893 start.go:83] releasing machines lock for "multinode-121443", held for 1m31.841209887s
	I0719 15:15:01.095457   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.095740   40893 main.go:141] libmachine: (multinode-121443) Calling .GetIP
	I0719 15:15:01.098130   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.098505   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:01.098540   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.098720   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.099321   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.099489   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.099577   40893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:15:01.099622   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:01.099723   40893 ssh_runner.go:195] Run: cat /version.json
	I0719 15:15:01.099747   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:01.102017   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.102439   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.102471   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:01.102519   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.102694   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:01.102841   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:01.102989   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:01.103044   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:01.103067   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.103108   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:15:01.103249   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:01.103398   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:01.103565   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:01.103690   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:15:01.183498   40893 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 15:15:01.183793   40893 ssh_runner.go:195] Run: systemctl --version
	I0719 15:15:01.212288   40893 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 15:15:01.212343   40893 command_runner.go:130] > systemd 252 (252)
	I0719 15:15:01.212364   40893 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 15:15:01.212437   40893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:15:01.388981   40893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 15:15:01.395406   40893 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 15:15:01.395468   40893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:15:01.395532   40893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:15:01.405977   40893 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 15:15:01.406001   40893 start.go:495] detecting cgroup driver to use...
	I0719 15:15:01.406072   40893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:15:01.423271   40893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:15:01.437880   40893 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:15:01.437934   40893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:15:01.453383   40893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:15:01.467872   40893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:15:01.626285   40893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:15:01.781158   40893 docker.go:233] disabling docker service ...
	I0719 15:15:01.781231   40893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:15:01.801679   40893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:15:01.817234   40893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:15:01.970187   40893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:15:02.124945   40893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:15:02.140269   40893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:15:02.158982   40893 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0719 15:15:02.159033   40893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:15:02.159090   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.170246   40893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:15:02.170326   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.181405   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.192141   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.202822   40893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:15:02.213538   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.224654   40893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.235598   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.246312   40893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:15:02.256183   40893 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 15:15:02.256275   40893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:15:02.265954   40893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:15:02.402781   40893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:15:02.744299   40893 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:15:02.744360   40893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:15:02.750746   40893 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0719 15:15:02.750774   40893 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 15:15:02.750782   40893 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0719 15:15:02.750791   40893 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 15:15:02.750797   40893 command_runner.go:130] > Access: 2024-07-19 15:15:02.639302039 +0000
	I0719 15:15:02.750806   40893 command_runner.go:130] > Modify: 2024-07-19 15:15:02.606301236 +0000
	I0719 15:15:02.750813   40893 command_runner.go:130] > Change: 2024-07-19 15:15:02.606301236 +0000
	I0719 15:15:02.750839   40893 command_runner.go:130] >  Birth: -
	I0719 15:15:02.750871   40893 start.go:563] Will wait 60s for crictl version
	I0719 15:15:02.750933   40893 ssh_runner.go:195] Run: which crictl
	I0719 15:15:02.762073   40893 command_runner.go:130] > /usr/bin/crictl
	I0719 15:15:02.762554   40893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:15:02.801429   40893 command_runner.go:130] > Version:  0.1.0
	I0719 15:15:02.801451   40893 command_runner.go:130] > RuntimeName:  cri-o
	I0719 15:15:02.801456   40893 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0719 15:15:02.801461   40893 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 15:15:02.802437   40893 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:15:02.802535   40893 ssh_runner.go:195] Run: crio --version
	I0719 15:15:02.832276   40893 command_runner.go:130] > crio version 1.29.1
	I0719 15:15:02.832307   40893 command_runner.go:130] > Version:        1.29.1
	I0719 15:15:02.832316   40893 command_runner.go:130] > GitCommit:      unknown
	I0719 15:15:02.832322   40893 command_runner.go:130] > GitCommitDate:  unknown
	I0719 15:15:02.832328   40893 command_runner.go:130] > GitTreeState:   clean
	I0719 15:15:02.832337   40893 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 15:15:02.832343   40893 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 15:15:02.832350   40893 command_runner.go:130] > Compiler:       gc
	I0719 15:15:02.832359   40893 command_runner.go:130] > Platform:       linux/amd64
	I0719 15:15:02.832366   40893 command_runner.go:130] > Linkmode:       dynamic
	I0719 15:15:02.832376   40893 command_runner.go:130] > BuildTags:      
	I0719 15:15:02.832390   40893 command_runner.go:130] >   containers_image_ostree_stub
	I0719 15:15:02.832399   40893 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 15:15:02.832406   40893 command_runner.go:130] >   btrfs_noversion
	I0719 15:15:02.832415   40893 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 15:15:02.832423   40893 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 15:15:02.832429   40893 command_runner.go:130] >   seccomp
	I0719 15:15:02.832435   40893 command_runner.go:130] > LDFlags:          unknown
	I0719 15:15:02.832443   40893 command_runner.go:130] > SeccompEnabled:   true
	I0719 15:15:02.832447   40893 command_runner.go:130] > AppArmorEnabled:  false
	I0719 15:15:02.832517   40893 ssh_runner.go:195] Run: crio --version
	I0719 15:15:02.859990   40893 command_runner.go:130] > crio version 1.29.1
	I0719 15:15:02.860016   40893 command_runner.go:130] > Version:        1.29.1
	I0719 15:15:02.860024   40893 command_runner.go:130] > GitCommit:      unknown
	I0719 15:15:02.860030   40893 command_runner.go:130] > GitCommitDate:  unknown
	I0719 15:15:02.860037   40893 command_runner.go:130] > GitTreeState:   clean
	I0719 15:15:02.860057   40893 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 15:15:02.860063   40893 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 15:15:02.860073   40893 command_runner.go:130] > Compiler:       gc
	I0719 15:15:02.860081   40893 command_runner.go:130] > Platform:       linux/amd64
	I0719 15:15:02.860090   40893 command_runner.go:130] > Linkmode:       dynamic
	I0719 15:15:02.860097   40893 command_runner.go:130] > BuildTags:      
	I0719 15:15:02.860108   40893 command_runner.go:130] >   containers_image_ostree_stub
	I0719 15:15:02.860116   40893 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 15:15:02.860123   40893 command_runner.go:130] >   btrfs_noversion
	I0719 15:15:02.860133   40893 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 15:15:02.860143   40893 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 15:15:02.860150   40893 command_runner.go:130] >   seccomp
	I0719 15:15:02.860159   40893 command_runner.go:130] > LDFlags:          unknown
	I0719 15:15:02.860165   40893 command_runner.go:130] > SeccompEnabled:   true
	I0719 15:15:02.860174   40893 command_runner.go:130] > AppArmorEnabled:  false
	I0719 15:15:02.863442   40893 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:15:02.864846   40893 main.go:141] libmachine: (multinode-121443) Calling .GetIP
	I0719 15:15:02.867471   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:02.867916   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:02.867944   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:02.868121   40893 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:15:02.872406   40893 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0719 15:15:02.872477   40893 kubeadm.go:883] updating cluster {Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:15:02.872590   40893 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:15:02.872640   40893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:15:02.915715   40893 command_runner.go:130] > {
	I0719 15:15:02.915738   40893 command_runner.go:130] >   "images": [
	I0719 15:15:02.915744   40893 command_runner.go:130] >     {
	I0719 15:15:02.915761   40893 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 15:15:02.915767   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.915775   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 15:15:02.915781   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915795   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.915806   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 15:15:02.915817   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 15:15:02.915822   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915828   40893 command_runner.go:130] >       "size": "87165492",
	I0719 15:15:02.915832   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.915838   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.915847   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.915860   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.915869   40893 command_runner.go:130] >     },
	I0719 15:15:02.915875   40893 command_runner.go:130] >     {
	I0719 15:15:02.915887   40893 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 15:15:02.915901   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.915912   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 15:15:02.915918   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915925   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.915936   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 15:15:02.915952   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 15:15:02.915961   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915968   40893 command_runner.go:130] >       "size": "1363676",
	I0719 15:15:02.915976   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.915988   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.915998   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916007   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916014   40893 command_runner.go:130] >     },
	I0719 15:15:02.916022   40893 command_runner.go:130] >     {
	I0719 15:15:02.916032   40893 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 15:15:02.916042   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916051   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 15:15:02.916057   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916066   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916081   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 15:15:02.916097   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 15:15:02.916105   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916113   40893 command_runner.go:130] >       "size": "31470524",
	I0719 15:15:02.916122   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.916137   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916146   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916153   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916161   40893 command_runner.go:130] >     },
	I0719 15:15:02.916167   40893 command_runner.go:130] >     {
	I0719 15:15:02.916181   40893 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 15:15:02.916191   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916202   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 15:15:02.916209   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916217   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916230   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 15:15:02.916253   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 15:15:02.916261   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916270   40893 command_runner.go:130] >       "size": "61245718",
	I0719 15:15:02.916279   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.916288   40893 command_runner.go:130] >       "username": "nonroot",
	I0719 15:15:02.916295   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916304   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916310   40893 command_runner.go:130] >     },
	I0719 15:15:02.916318   40893 command_runner.go:130] >     {
	I0719 15:15:02.916329   40893 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 15:15:02.916338   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916348   40893 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 15:15:02.916357   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916364   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916378   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 15:15:02.916393   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 15:15:02.916401   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916408   40893 command_runner.go:130] >       "size": "150779692",
	I0719 15:15:02.916417   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.916424   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.916432   40893 command_runner.go:130] >       },
	I0719 15:15:02.916439   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916448   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916455   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916463   40893 command_runner.go:130] >     },
	I0719 15:15:02.916478   40893 command_runner.go:130] >     {
	I0719 15:15:02.916490   40893 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 15:15:02.916499   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916510   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 15:15:02.916518   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916526   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916541   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 15:15:02.916556   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 15:15:02.916564   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916572   40893 command_runner.go:130] >       "size": "117609954",
	I0719 15:15:02.916580   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.916588   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.916596   40893 command_runner.go:130] >       },
	I0719 15:15:02.916603   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916611   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916617   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916621   40893 command_runner.go:130] >     },
	I0719 15:15:02.916626   40893 command_runner.go:130] >     {
	I0719 15:15:02.916636   40893 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 15:15:02.916645   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916656   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 15:15:02.916662   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916670   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916686   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 15:15:02.916702   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 15:15:02.916710   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916718   40893 command_runner.go:130] >       "size": "112198984",
	I0719 15:15:02.916726   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.916733   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.916741   40893 command_runner.go:130] >       },
	I0719 15:15:02.916748   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916757   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916764   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916771   40893 command_runner.go:130] >     },
	I0719 15:15:02.916777   40893 command_runner.go:130] >     {
	I0719 15:15:02.916790   40893 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 15:15:02.916808   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916819   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 15:15:02.916827   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916834   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916867   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 15:15:02.916881   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 15:15:02.916887   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916894   40893 command_runner.go:130] >       "size": "85953945",
	I0719 15:15:02.916903   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.916910   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916917   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916927   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916931   40893 command_runner.go:130] >     },
	I0719 15:15:02.916935   40893 command_runner.go:130] >     {
	I0719 15:15:02.916944   40893 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 15:15:02.916952   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916961   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 15:15:02.916967   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916974   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916989   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 15:15:02.917004   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 15:15:02.917013   40893 command_runner.go:130] >       ],
	I0719 15:15:02.917020   40893 command_runner.go:130] >       "size": "63051080",
	I0719 15:15:02.917030   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.917039   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.917045   40893 command_runner.go:130] >       },
	I0719 15:15:02.917055   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.917062   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.917071   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.917079   40893 command_runner.go:130] >     },
	I0719 15:15:02.917086   40893 command_runner.go:130] >     {
	I0719 15:15:02.917097   40893 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 15:15:02.917105   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.917114   40893 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 15:15:02.917122   40893 command_runner.go:130] >       ],
	I0719 15:15:02.917129   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.917150   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 15:15:02.917165   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 15:15:02.917172   40893 command_runner.go:130] >       ],
	I0719 15:15:02.917180   40893 command_runner.go:130] >       "size": "750414",
	I0719 15:15:02.917188   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.917197   40893 command_runner.go:130] >         "value": "65535"
	I0719 15:15:02.917202   40893 command_runner.go:130] >       },
	I0719 15:15:02.917209   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.917219   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.917229   40893 command_runner.go:130] >       "pinned": true
	I0719 15:15:02.917236   40893 command_runner.go:130] >     }
	I0719 15:15:02.917242   40893 command_runner.go:130] >   ]
	I0719 15:15:02.917247   40893 command_runner.go:130] > }
	I0719 15:15:02.917421   40893 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:15:02.917434   40893 crio.go:433] Images already preloaded, skipping extraction
	I0719 15:15:02.917525   40893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:15:02.951905   40893 command_runner.go:130] > {
	I0719 15:15:02.951924   40893 command_runner.go:130] >   "images": [
	I0719 15:15:02.951928   40893 command_runner.go:130] >     {
	I0719 15:15:02.951936   40893 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 15:15:02.951941   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.951947   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 15:15:02.951950   40893 command_runner.go:130] >       ],
	I0719 15:15:02.951954   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.951962   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 15:15:02.951969   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 15:15:02.951972   40893 command_runner.go:130] >       ],
	I0719 15:15:02.951976   40893 command_runner.go:130] >       "size": "87165492",
	I0719 15:15:02.951980   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.951984   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.951992   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.951998   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952001   40893 command_runner.go:130] >     },
	I0719 15:15:02.952004   40893 command_runner.go:130] >     {
	I0719 15:15:02.952009   40893 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 15:15:02.952029   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952037   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 15:15:02.952041   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952045   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952052   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 15:15:02.952061   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 15:15:02.952066   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952070   40893 command_runner.go:130] >       "size": "1363676",
	I0719 15:15:02.952076   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952084   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952101   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952105   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952110   40893 command_runner.go:130] >     },
	I0719 15:15:02.952114   40893 command_runner.go:130] >     {
	I0719 15:15:02.952120   40893 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 15:15:02.952126   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952131   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 15:15:02.952145   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952151   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952161   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 15:15:02.952170   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 15:15:02.952176   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952181   40893 command_runner.go:130] >       "size": "31470524",
	I0719 15:15:02.952187   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952191   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952197   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952201   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952206   40893 command_runner.go:130] >     },
	I0719 15:15:02.952209   40893 command_runner.go:130] >     {
	I0719 15:15:02.952217   40893 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 15:15:02.952225   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952230   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 15:15:02.952235   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952239   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952249   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 15:15:02.952263   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 15:15:02.952274   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952280   40893 command_runner.go:130] >       "size": "61245718",
	I0719 15:15:02.952286   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952295   40893 command_runner.go:130] >       "username": "nonroot",
	I0719 15:15:02.952301   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952305   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952311   40893 command_runner.go:130] >     },
	I0719 15:15:02.952314   40893 command_runner.go:130] >     {
	I0719 15:15:02.952323   40893 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 15:15:02.952329   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952333   40893 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 15:15:02.952337   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952341   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952350   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 15:15:02.952359   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 15:15:02.952364   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952368   40893 command_runner.go:130] >       "size": "150779692",
	I0719 15:15:02.952374   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952377   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952381   40893 command_runner.go:130] >       },
	I0719 15:15:02.952387   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952390   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952396   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952399   40893 command_runner.go:130] >     },
	I0719 15:15:02.952405   40893 command_runner.go:130] >     {
	I0719 15:15:02.952411   40893 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 15:15:02.952416   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952421   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 15:15:02.952427   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952431   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952439   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 15:15:02.952448   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 15:15:02.952458   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952464   40893 command_runner.go:130] >       "size": "117609954",
	I0719 15:15:02.952468   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952475   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952482   40893 command_runner.go:130] >       },
	I0719 15:15:02.952488   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952492   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952498   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952501   40893 command_runner.go:130] >     },
	I0719 15:15:02.952506   40893 command_runner.go:130] >     {
	I0719 15:15:02.952512   40893 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 15:15:02.952518   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952523   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 15:15:02.952529   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952533   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952542   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 15:15:02.952551   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 15:15:02.952559   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952563   40893 command_runner.go:130] >       "size": "112198984",
	I0719 15:15:02.952568   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952572   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952577   40893 command_runner.go:130] >       },
	I0719 15:15:02.952581   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952587   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952591   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952598   40893 command_runner.go:130] >     },
	I0719 15:15:02.952602   40893 command_runner.go:130] >     {
	I0719 15:15:02.952608   40893 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 15:15:02.952614   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952620   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 15:15:02.952625   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952629   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952678   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 15:15:02.952690   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 15:15:02.952694   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952698   40893 command_runner.go:130] >       "size": "85953945",
	I0719 15:15:02.952704   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952708   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952713   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952717   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952727   40893 command_runner.go:130] >     },
	I0719 15:15:02.952733   40893 command_runner.go:130] >     {
	I0719 15:15:02.952739   40893 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 15:15:02.952745   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952749   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 15:15:02.952752   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952756   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952765   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 15:15:02.952774   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 15:15:02.952779   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952783   40893 command_runner.go:130] >       "size": "63051080",
	I0719 15:15:02.952789   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952792   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952798   40893 command_runner.go:130] >       },
	I0719 15:15:02.952802   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952808   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952812   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952817   40893 command_runner.go:130] >     },
	I0719 15:15:02.952820   40893 command_runner.go:130] >     {
	I0719 15:15:02.952828   40893 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 15:15:02.952832   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952838   40893 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 15:15:02.952842   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952852   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952859   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 15:15:02.952869   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 15:15:02.952875   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952879   40893 command_runner.go:130] >       "size": "750414",
	I0719 15:15:02.952885   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952889   40893 command_runner.go:130] >         "value": "65535"
	I0719 15:15:02.952892   40893 command_runner.go:130] >       },
	I0719 15:15:02.952896   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952901   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952905   40893 command_runner.go:130] >       "pinned": true
	I0719 15:15:02.952910   40893 command_runner.go:130] >     }
	I0719 15:15:02.952914   40893 command_runner.go:130] >   ]
	I0719 15:15:02.952923   40893 command_runner.go:130] > }
	I0719 15:15:02.955123   40893 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:15:02.955141   40893 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:15:02.955150   40893 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.30.3 crio true true} ...
	I0719 15:15:02.955268   40893 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-121443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:15:02.955356   40893 ssh_runner.go:195] Run: crio config
	I0719 15:15:02.988947   40893 command_runner.go:130] ! time="2024-07-19 15:15:02.967781103Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0719 15:15:02.995547   40893 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0719 15:15:03.001800   40893 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0719 15:15:03.001829   40893 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0719 15:15:03.001835   40893 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0719 15:15:03.001839   40893 command_runner.go:130] > #
	I0719 15:15:03.001845   40893 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0719 15:15:03.001852   40893 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0719 15:15:03.001857   40893 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0719 15:15:03.001866   40893 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0719 15:15:03.001870   40893 command_runner.go:130] > # reload'.
	I0719 15:15:03.001876   40893 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0719 15:15:03.001881   40893 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0719 15:15:03.001890   40893 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0719 15:15:03.001897   40893 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0719 15:15:03.001911   40893 command_runner.go:130] > [crio]
	I0719 15:15:03.001921   40893 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0719 15:15:03.001925   40893 command_runner.go:130] > # containers images, in this directory.
	I0719 15:15:03.001930   40893 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0719 15:15:03.001942   40893 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0719 15:15:03.001949   40893 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0719 15:15:03.001962   40893 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0719 15:15:03.001968   40893 command_runner.go:130] > # imagestore = ""
	I0719 15:15:03.001974   40893 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0719 15:15:03.001982   40893 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0719 15:15:03.001986   40893 command_runner.go:130] > storage_driver = "overlay"
	I0719 15:15:03.001993   40893 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0719 15:15:03.001998   40893 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0719 15:15:03.002004   40893 command_runner.go:130] > storage_option = [
	I0719 15:15:03.002008   40893 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0719 15:15:03.002012   40893 command_runner.go:130] > ]
	I0719 15:15:03.002021   40893 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0719 15:15:03.002034   40893 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0719 15:15:03.002040   40893 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0719 15:15:03.002046   40893 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0719 15:15:03.002054   40893 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0719 15:15:03.002064   40893 command_runner.go:130] > # always happen on a node reboot
	I0719 15:15:03.002071   40893 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0719 15:15:03.002088   40893 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0719 15:15:03.002096   40893 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0719 15:15:03.002101   40893 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0719 15:15:03.002105   40893 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0719 15:15:03.002112   40893 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0719 15:15:03.002121   40893 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0719 15:15:03.002127   40893 command_runner.go:130] > # internal_wipe = true
	I0719 15:15:03.002134   40893 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0719 15:15:03.002142   40893 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0719 15:15:03.002146   40893 command_runner.go:130] > # internal_repair = false
	I0719 15:15:03.002153   40893 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0719 15:15:03.002158   40893 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0719 15:15:03.002165   40893 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0719 15:15:03.002171   40893 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0719 15:15:03.002180   40893 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0719 15:15:03.002186   40893 command_runner.go:130] > [crio.api]
	I0719 15:15:03.002191   40893 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0719 15:15:03.002198   40893 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0719 15:15:03.002202   40893 command_runner.go:130] > # IP address on which the stream server will listen.
	I0719 15:15:03.002207   40893 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0719 15:15:03.002213   40893 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0719 15:15:03.002219   40893 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0719 15:15:03.002223   40893 command_runner.go:130] > # stream_port = "0"
	I0719 15:15:03.002228   40893 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0719 15:15:03.002247   40893 command_runner.go:130] > # stream_enable_tls = false
	I0719 15:15:03.002257   40893 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0719 15:15:03.002265   40893 command_runner.go:130] > # stream_idle_timeout = ""
	I0719 15:15:03.002271   40893 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0719 15:15:03.002279   40893 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0719 15:15:03.002282   40893 command_runner.go:130] > # minutes.
	I0719 15:15:03.002291   40893 command_runner.go:130] > # stream_tls_cert = ""
	I0719 15:15:03.002299   40893 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0719 15:15:03.002305   40893 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0719 15:15:03.002311   40893 command_runner.go:130] > # stream_tls_key = ""
	I0719 15:15:03.002317   40893 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0719 15:15:03.002325   40893 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0719 15:15:03.002343   40893 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0719 15:15:03.002349   40893 command_runner.go:130] > # stream_tls_ca = ""
	I0719 15:15:03.002356   40893 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 15:15:03.002363   40893 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0719 15:15:03.002370   40893 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 15:15:03.002376   40893 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0719 15:15:03.002382   40893 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0719 15:15:03.002390   40893 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0719 15:15:03.002394   40893 command_runner.go:130] > [crio.runtime]
	I0719 15:15:03.002401   40893 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0719 15:15:03.002408   40893 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0719 15:15:03.002412   40893 command_runner.go:130] > # "nofile=1024:2048"
	I0719 15:15:03.002420   40893 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0719 15:15:03.002424   40893 command_runner.go:130] > # default_ulimits = [
	I0719 15:15:03.002427   40893 command_runner.go:130] > # ]
	I0719 15:15:03.002435   40893 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0719 15:15:03.002439   40893 command_runner.go:130] > # no_pivot = false
	I0719 15:15:03.002448   40893 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0719 15:15:03.002456   40893 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0719 15:15:03.002460   40893 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0719 15:15:03.002467   40893 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0719 15:15:03.002472   40893 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0719 15:15:03.002481   40893 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 15:15:03.002487   40893 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0719 15:15:03.002491   40893 command_runner.go:130] > # Cgroup setting for conmon
	I0719 15:15:03.002498   40893 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0719 15:15:03.002504   40893 command_runner.go:130] > conmon_cgroup = "pod"
	I0719 15:15:03.002509   40893 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0719 15:15:03.002516   40893 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0719 15:15:03.002522   40893 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 15:15:03.002534   40893 command_runner.go:130] > conmon_env = [
	I0719 15:15:03.002542   40893 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 15:15:03.002546   40893 command_runner.go:130] > ]
	I0719 15:15:03.002552   40893 command_runner.go:130] > # Additional environment variables to set for all the
	I0719 15:15:03.002558   40893 command_runner.go:130] > # containers. These are overridden if set in the
	I0719 15:15:03.002564   40893 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0719 15:15:03.002569   40893 command_runner.go:130] > # default_env = [
	I0719 15:15:03.002572   40893 command_runner.go:130] > # ]
	I0719 15:15:03.002577   40893 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0719 15:15:03.002586   40893 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0719 15:15:03.002590   40893 command_runner.go:130] > # selinux = false
	I0719 15:15:03.002595   40893 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0719 15:15:03.002603   40893 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0719 15:15:03.002609   40893 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0719 15:15:03.002615   40893 command_runner.go:130] > # seccomp_profile = ""
	I0719 15:15:03.002620   40893 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0719 15:15:03.002628   40893 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0719 15:15:03.002633   40893 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0719 15:15:03.002640   40893 command_runner.go:130] > # which might increase security.
	I0719 15:15:03.002644   40893 command_runner.go:130] > # This option is currently deprecated,
	I0719 15:15:03.002653   40893 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0719 15:15:03.002661   40893 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0719 15:15:03.002667   40893 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0719 15:15:03.002675   40893 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0719 15:15:03.002683   40893 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0719 15:15:03.002691   40893 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0719 15:15:03.002696   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.002702   40893 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0719 15:15:03.002707   40893 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0719 15:15:03.002714   40893 command_runner.go:130] > # the cgroup blockio controller.
	I0719 15:15:03.002718   40893 command_runner.go:130] > # blockio_config_file = ""
	I0719 15:15:03.002726   40893 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0719 15:15:03.002730   40893 command_runner.go:130] > # blockio parameters.
	I0719 15:15:03.002736   40893 command_runner.go:130] > # blockio_reload = false
	I0719 15:15:03.002742   40893 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0719 15:15:03.002748   40893 command_runner.go:130] > # irqbalance daemon.
	I0719 15:15:03.002757   40893 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0719 15:15:03.002765   40893 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0719 15:15:03.002772   40893 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0719 15:15:03.002780   40893 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0719 15:15:03.002788   40893 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0719 15:15:03.002795   40893 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0719 15:15:03.002801   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.002806   40893 command_runner.go:130] > # rdt_config_file = ""
	I0719 15:15:03.002813   40893 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0719 15:15:03.002816   40893 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0719 15:15:03.002844   40893 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0719 15:15:03.002851   40893 command_runner.go:130] > # separate_pull_cgroup = ""
	I0719 15:15:03.002857   40893 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0719 15:15:03.002865   40893 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0719 15:15:03.002868   40893 command_runner.go:130] > # will be added.
	I0719 15:15:03.002874   40893 command_runner.go:130] > # default_capabilities = [
	I0719 15:15:03.002878   40893 command_runner.go:130] > # 	"CHOWN",
	I0719 15:15:03.002884   40893 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0719 15:15:03.002887   40893 command_runner.go:130] > # 	"FSETID",
	I0719 15:15:03.002892   40893 command_runner.go:130] > # 	"FOWNER",
	I0719 15:15:03.002895   40893 command_runner.go:130] > # 	"SETGID",
	I0719 15:15:03.002906   40893 command_runner.go:130] > # 	"SETUID",
	I0719 15:15:03.002911   40893 command_runner.go:130] > # 	"SETPCAP",
	I0719 15:15:03.002915   40893 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0719 15:15:03.002921   40893 command_runner.go:130] > # 	"KILL",
	I0719 15:15:03.002924   40893 command_runner.go:130] > # ]
	I0719 15:15:03.002935   40893 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0719 15:15:03.002942   40893 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0719 15:15:03.002950   40893 command_runner.go:130] > # add_inheritable_capabilities = false
	I0719 15:15:03.002957   40893 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0719 15:15:03.002965   40893 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 15:15:03.002969   40893 command_runner.go:130] > default_sysctls = [
	I0719 15:15:03.002975   40893 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0719 15:15:03.002978   40893 command_runner.go:130] > ]
	I0719 15:15:03.002982   40893 command_runner.go:130] > # List of devices on the host that a
	I0719 15:15:03.002990   40893 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0719 15:15:03.002998   40893 command_runner.go:130] > # allowed_devices = [
	I0719 15:15:03.003004   40893 command_runner.go:130] > # 	"/dev/fuse",
	I0719 15:15:03.003007   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003014   40893 command_runner.go:130] > # List of additional devices. specified as
	I0719 15:15:03.003021   40893 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0719 15:15:03.003028   40893 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0719 15:15:03.003034   40893 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 15:15:03.003040   40893 command_runner.go:130] > # additional_devices = [
	I0719 15:15:03.003043   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003050   40893 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0719 15:15:03.003054   40893 command_runner.go:130] > # cdi_spec_dirs = [
	I0719 15:15:03.003059   40893 command_runner.go:130] > # 	"/etc/cdi",
	I0719 15:15:03.003063   40893 command_runner.go:130] > # 	"/var/run/cdi",
	I0719 15:15:03.003066   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003071   40893 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0719 15:15:03.003079   40893 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0719 15:15:03.003083   40893 command_runner.go:130] > # Defaults to false.
	I0719 15:15:03.003089   40893 command_runner.go:130] > # device_ownership_from_security_context = false
	I0719 15:15:03.003095   40893 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0719 15:15:03.003102   40893 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0719 15:15:03.003106   40893 command_runner.go:130] > # hooks_dir = [
	I0719 15:15:03.003112   40893 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0719 15:15:03.003115   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003120   40893 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0719 15:15:03.003128   40893 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0719 15:15:03.003133   40893 command_runner.go:130] > # its default mounts from the following two files:
	I0719 15:15:03.003138   40893 command_runner.go:130] > #
	I0719 15:15:03.003144   40893 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0719 15:15:03.003151   40893 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0719 15:15:03.003157   40893 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0719 15:15:03.003161   40893 command_runner.go:130] > #
	I0719 15:15:03.003166   40893 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0719 15:15:03.003174   40893 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0719 15:15:03.003182   40893 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0719 15:15:03.003189   40893 command_runner.go:130] > #      only add mounts it finds in this file.
	I0719 15:15:03.003193   40893 command_runner.go:130] > #
	I0719 15:15:03.003201   40893 command_runner.go:130] > # default_mounts_file = ""
	I0719 15:15:03.003208   40893 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0719 15:15:03.003214   40893 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0719 15:15:03.003220   40893 command_runner.go:130] > pids_limit = 1024
	I0719 15:15:03.003229   40893 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0719 15:15:03.003237   40893 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0719 15:15:03.003246   40893 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0719 15:15:03.003253   40893 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0719 15:15:03.003259   40893 command_runner.go:130] > # log_size_max = -1
	I0719 15:15:03.003265   40893 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0719 15:15:03.003269   40893 command_runner.go:130] > # log_to_journald = false
	I0719 15:15:03.003277   40893 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0719 15:15:03.003281   40893 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0719 15:15:03.003287   40893 command_runner.go:130] > # Path to directory for container attach sockets.
	I0719 15:15:03.003292   40893 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0719 15:15:03.003299   40893 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0719 15:15:03.003303   40893 command_runner.go:130] > # bind_mount_prefix = ""
	I0719 15:15:03.003310   40893 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0719 15:15:03.003314   40893 command_runner.go:130] > # read_only = false
	I0719 15:15:03.003322   40893 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0719 15:15:03.003328   40893 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0719 15:15:03.003333   40893 command_runner.go:130] > # live configuration reload.
	I0719 15:15:03.003337   40893 command_runner.go:130] > # log_level = "info"
	I0719 15:15:03.003342   40893 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0719 15:15:03.003349   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.003353   40893 command_runner.go:130] > # log_filter = ""
	I0719 15:15:03.003361   40893 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0719 15:15:03.003370   40893 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0719 15:15:03.003374   40893 command_runner.go:130] > # separated by comma.
	I0719 15:15:03.003381   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003387   40893 command_runner.go:130] > # uid_mappings = ""
	I0719 15:15:03.003393   40893 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0719 15:15:03.003400   40893 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0719 15:15:03.003404   40893 command_runner.go:130] > # separated by comma.
	I0719 15:15:03.003413   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003421   40893 command_runner.go:130] > # gid_mappings = ""
	I0719 15:15:03.003431   40893 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0719 15:15:03.003439   40893 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 15:15:03.003444   40893 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 15:15:03.003453   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003458   40893 command_runner.go:130] > # minimum_mappable_uid = -1
	I0719 15:15:03.003463   40893 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0719 15:15:03.003472   40893 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 15:15:03.003478   40893 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 15:15:03.003487   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003491   40893 command_runner.go:130] > # minimum_mappable_gid = -1
	I0719 15:15:03.003497   40893 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0719 15:15:03.003505   40893 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0719 15:15:03.003510   40893 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0719 15:15:03.003515   40893 command_runner.go:130] > # ctr_stop_timeout = 30
	I0719 15:15:03.003520   40893 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0719 15:15:03.003526   40893 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0719 15:15:03.003530   40893 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0719 15:15:03.003537   40893 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0719 15:15:03.003540   40893 command_runner.go:130] > drop_infra_ctr = false
	I0719 15:15:03.003547   40893 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0719 15:15:03.003558   40893 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0719 15:15:03.003567   40893 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0719 15:15:03.003571   40893 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0719 15:15:03.003580   40893 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0719 15:15:03.003585   40893 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0719 15:15:03.003596   40893 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0719 15:15:03.003601   40893 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0719 15:15:03.003608   40893 command_runner.go:130] > # shared_cpuset = ""
	I0719 15:15:03.003613   40893 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0719 15:15:03.003620   40893 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0719 15:15:03.003625   40893 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0719 15:15:03.003633   40893 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0719 15:15:03.003639   40893 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0719 15:15:03.003644   40893 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0719 15:15:03.003659   40893 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0719 15:15:03.003666   40893 command_runner.go:130] > # enable_criu_support = false
	I0719 15:15:03.003675   40893 command_runner.go:130] > # Enable/disable the generation of the container,
	I0719 15:15:03.003683   40893 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0719 15:15:03.003687   40893 command_runner.go:130] > # enable_pod_events = false
	I0719 15:15:03.003693   40893 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0719 15:15:03.003712   40893 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0719 15:15:03.003716   40893 command_runner.go:130] > # default_runtime = "runc"
	I0719 15:15:03.003726   40893 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0719 15:15:03.003734   40893 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0719 15:15:03.003744   40893 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0719 15:15:03.003752   40893 command_runner.go:130] > # creation as a file is not desired either.
	I0719 15:15:03.003760   40893 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0719 15:15:03.003766   40893 command_runner.go:130] > # the hostname is being managed dynamically.
	I0719 15:15:03.003770   40893 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0719 15:15:03.003773   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003779   40893 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0719 15:15:03.003787   40893 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0719 15:15:03.003792   40893 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0719 15:15:03.003799   40893 command_runner.go:130] > # Each entry in the table should follow the format:
	I0719 15:15:03.003802   40893 command_runner.go:130] > #
	I0719 15:15:03.003807   40893 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0719 15:15:03.003812   40893 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0719 15:15:03.003855   40893 command_runner.go:130] > # runtime_type = "oci"
	I0719 15:15:03.003862   40893 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0719 15:15:03.003866   40893 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0719 15:15:03.003870   40893 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0719 15:15:03.003874   40893 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0719 15:15:03.003880   40893 command_runner.go:130] > # monitor_env = []
	I0719 15:15:03.003884   40893 command_runner.go:130] > # privileged_without_host_devices = false
	I0719 15:15:03.003892   40893 command_runner.go:130] > # allowed_annotations = []
	I0719 15:15:03.003899   40893 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0719 15:15:03.003908   40893 command_runner.go:130] > # Where:
	I0719 15:15:03.003913   40893 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0719 15:15:03.003921   40893 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0719 15:15:03.003928   40893 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0719 15:15:03.003936   40893 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0719 15:15:03.003946   40893 command_runner.go:130] > #   in $PATH.
	I0719 15:15:03.003955   40893 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0719 15:15:03.003959   40893 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0719 15:15:03.003966   40893 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0719 15:15:03.003972   40893 command_runner.go:130] > #   state.
	I0719 15:15:03.003978   40893 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0719 15:15:03.003985   40893 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0719 15:15:03.003991   40893 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0719 15:15:03.003998   40893 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0719 15:15:03.004003   40893 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0719 15:15:03.004011   40893 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0719 15:15:03.004016   40893 command_runner.go:130] > #   The currently recognized values are:
	I0719 15:15:03.004024   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0719 15:15:03.004033   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0719 15:15:03.004038   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0719 15:15:03.004044   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0719 15:15:03.004052   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0719 15:15:03.004060   40893 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0719 15:15:03.004066   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0719 15:15:03.004074   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0719 15:15:03.004079   40893 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0719 15:15:03.004087   40893 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0719 15:15:03.004091   40893 command_runner.go:130] > #   deprecated option "conmon".
	I0719 15:15:03.004100   40893 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0719 15:15:03.004104   40893 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0719 15:15:03.004113   40893 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0719 15:15:03.004119   40893 command_runner.go:130] > #   should be moved to the container's cgroup
	I0719 15:15:03.004125   40893 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0719 15:15:03.004132   40893 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0719 15:15:03.004139   40893 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0719 15:15:03.004145   40893 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0719 15:15:03.004149   40893 command_runner.go:130] > #
	I0719 15:15:03.004156   40893 command_runner.go:130] > # Using the seccomp notifier feature:
	I0719 15:15:03.004161   40893 command_runner.go:130] > #
	I0719 15:15:03.004169   40893 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0719 15:15:03.004175   40893 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0719 15:15:03.004185   40893 command_runner.go:130] > #
	I0719 15:15:03.004193   40893 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0719 15:15:03.004199   40893 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0719 15:15:03.004203   40893 command_runner.go:130] > #
	I0719 15:15:03.004208   40893 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0719 15:15:03.004211   40893 command_runner.go:130] > # feature.
	I0719 15:15:03.004214   40893 command_runner.go:130] > #
	I0719 15:15:03.004220   40893 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0719 15:15:03.004228   40893 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0719 15:15:03.004234   40893 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0719 15:15:03.004241   40893 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0719 15:15:03.004249   40893 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0719 15:15:03.004254   40893 command_runner.go:130] > #
	I0719 15:15:03.004259   40893 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0719 15:15:03.004267   40893 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0719 15:15:03.004270   40893 command_runner.go:130] > #
	I0719 15:15:03.004276   40893 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0719 15:15:03.004283   40893 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0719 15:15:03.004286   40893 command_runner.go:130] > #
	I0719 15:15:03.004292   40893 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0719 15:15:03.004300   40893 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0719 15:15:03.004304   40893 command_runner.go:130] > # limitation.
	I0719 15:15:03.004308   40893 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0719 15:15:03.004314   40893 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0719 15:15:03.004319   40893 command_runner.go:130] > runtime_type = "oci"
	I0719 15:15:03.004325   40893 command_runner.go:130] > runtime_root = "/run/runc"
	I0719 15:15:03.004329   40893 command_runner.go:130] > runtime_config_path = ""
	I0719 15:15:03.004335   40893 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0719 15:15:03.004339   40893 command_runner.go:130] > monitor_cgroup = "pod"
	I0719 15:15:03.004345   40893 command_runner.go:130] > monitor_exec_cgroup = ""
	I0719 15:15:03.004349   40893 command_runner.go:130] > monitor_env = [
	I0719 15:15:03.004355   40893 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 15:15:03.004359   40893 command_runner.go:130] > ]
	I0719 15:15:03.004364   40893 command_runner.go:130] > privileged_without_host_devices = false
	I0719 15:15:03.004372   40893 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0719 15:15:03.004377   40893 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0719 15:15:03.004389   40893 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0719 15:15:03.004402   40893 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0719 15:15:03.004413   40893 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0719 15:15:03.004420   40893 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0719 15:15:03.004429   40893 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0719 15:15:03.004438   40893 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0719 15:15:03.004445   40893 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0719 15:15:03.004452   40893 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0719 15:15:03.004457   40893 command_runner.go:130] > # Example:
	I0719 15:15:03.004462   40893 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0719 15:15:03.004468   40893 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0719 15:15:03.004472   40893 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0719 15:15:03.004477   40893 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0719 15:15:03.004483   40893 command_runner.go:130] > # cpuset = 0
	I0719 15:15:03.004487   40893 command_runner.go:130] > # cpushares = "0-1"
	I0719 15:15:03.004491   40893 command_runner.go:130] > # Where:
	I0719 15:15:03.004495   40893 command_runner.go:130] > # The workload name is workload-type.
	I0719 15:15:03.004503   40893 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0719 15:15:03.004510   40893 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0719 15:15:03.004517   40893 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0719 15:15:03.004525   40893 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0719 15:15:03.004532   40893 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
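The workload mechanism described in the comments above is driven entirely by pod annotations. As a minimal sketch, assuming a workload named "workload-type" configured exactly like the commented example (the pod name "workload-demo" and container name "app" are illustrative, not from this run), a pod would opt in like this:

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                                 # hypothetical name
	  annotations:
	    io.crio/workload: ""                              # activation annotation (key only, value ignored)
	    io.crio.workload-type/app: '{"cpushares": "512"}' # per-container resource override
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF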
	I0719 15:15:03.004537   40893 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0719 15:15:03.004543   40893 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0719 15:15:03.004549   40893 command_runner.go:130] > # Default value is set to true
	I0719 15:15:03.004554   40893 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0719 15:15:03.004561   40893 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0719 15:15:03.004567   40893 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0719 15:15:03.004574   40893 command_runner.go:130] > # Default value is set to 'false'
	I0719 15:15:03.004579   40893 command_runner.go:130] > # disable_hostport_mapping = false
	I0719 15:15:03.004587   40893 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0719 15:15:03.004590   40893 command_runner.go:130] > #
	I0719 15:15:03.004595   40893 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0719 15:15:03.004600   40893 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0719 15:15:03.004606   40893 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0719 15:15:03.004611   40893 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0719 15:15:03.004622   40893 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0719 15:15:03.004626   40893 command_runner.go:130] > [crio.image]
	I0719 15:15:03.004631   40893 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0719 15:15:03.004635   40893 command_runner.go:130] > # default_transport = "docker://"
	I0719 15:15:03.004640   40893 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0719 15:15:03.004645   40893 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0719 15:15:03.004649   40893 command_runner.go:130] > # global_auth_file = ""
	I0719 15:15:03.004655   40893 command_runner.go:130] > # The image used to instantiate infra containers.
	I0719 15:15:03.004660   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.004664   40893 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0719 15:15:03.004669   40893 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0719 15:15:03.004674   40893 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0719 15:15:03.004679   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.004683   40893 command_runner.go:130] > # pause_image_auth_file = ""
	I0719 15:15:03.004687   40893 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0719 15:15:03.004693   40893 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0719 15:15:03.004698   40893 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0719 15:15:03.004703   40893 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0719 15:15:03.004707   40893 command_runner.go:130] > # pause_command = "/pause"
	I0719 15:15:03.004712   40893 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0719 15:15:03.004717   40893 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0719 15:15:03.004722   40893 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0719 15:15:03.004728   40893 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0719 15:15:03.004733   40893 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0719 15:15:03.004739   40893 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0719 15:15:03.004742   40893 command_runner.go:130] > # pinned_images = [
	I0719 15:15:03.004745   40893 command_runner.go:130] > # ]
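As a rough illustration of the pinned_images patterns described above (exact, glob, keyword), a CRI-O drop-in like the following would keep the pause image and everything under a registry prefix out of kubelet garbage collection. The file name 10-pinned-images.conf is an assumption; /etc/crio/crio.conf.d is CRI-O's default drop-in directory:

	sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf >/dev/null <<'EOF'
	[crio.image]
	pinned_images = [
	  "registry.k8s.io/pause:3.9",   # exact match
	  "gcr.io/k8s-minikube/*",       # glob: wildcard at the end
	]
	EOF
	sudo systemctl restart crio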
	I0719 15:15:03.004750   40893 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0719 15:15:03.004756   40893 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0719 15:15:03.004761   40893 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0719 15:15:03.004766   40893 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0719 15:15:03.004773   40893 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0719 15:15:03.004776   40893 command_runner.go:130] > # signature_policy = ""
	I0719 15:15:03.004781   40893 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0719 15:15:03.004787   40893 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0719 15:15:03.004793   40893 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0719 15:15:03.004809   40893 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0719 15:15:03.004820   40893 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0719 15:15:03.004829   40893 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0719 15:15:03.004837   40893 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0719 15:15:03.004847   40893 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0719 15:15:03.004851   40893 command_runner.go:130] > # changing them here.
	I0719 15:15:03.004859   40893 command_runner.go:130] > # insecure_registries = [
	I0719 15:15:03.004863   40893 command_runner.go:130] > # ]
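The comments above steer insecure-registry configuration toward containers-registries.conf(5) rather than this file. A minimal sketch following that recommendation, assuming a hypothetical local registry at 192.168.39.1:5000:

	sudo tee /etc/containers/registries.conf.d/99-insecure-local.conf >/dev/null <<'EOF'
	[[registry]]
	location = "192.168.39.1:5000"   # hypothetical registry address
	insecure = true
	EOF
	sudo systemctl restart crio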
	I0719 15:15:03.004871   40893 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0719 15:15:03.004880   40893 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0719 15:15:03.004889   40893 command_runner.go:130] > # image_volumes = "mkdir"
	I0719 15:15:03.004904   40893 command_runner.go:130] > # Temporary directory to use for storing big files
	I0719 15:15:03.004911   40893 command_runner.go:130] > # big_files_temporary_dir = ""
	I0719 15:15:03.004917   40893 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0719 15:15:03.004923   40893 command_runner.go:130] > # CNI plugins.
	I0719 15:15:03.004927   40893 command_runner.go:130] > [crio.network]
	I0719 15:15:03.004935   40893 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0719 15:15:03.004940   40893 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0719 15:15:03.004947   40893 command_runner.go:130] > # cni_default_network = ""
	I0719 15:15:03.004952   40893 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0719 15:15:03.004960   40893 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0719 15:15:03.004965   40893 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0719 15:15:03.004971   40893 command_runner.go:130] > # plugin_dirs = [
	I0719 15:15:03.004974   40893 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0719 15:15:03.004980   40893 command_runner.go:130] > # ]
	I0719 15:15:03.004985   40893 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0719 15:15:03.004991   40893 command_runner.go:130] > [crio.metrics]
	I0719 15:15:03.004995   40893 command_runner.go:130] > # Globally enable or disable metrics support.
	I0719 15:15:03.005001   40893 command_runner.go:130] > enable_metrics = true
	I0719 15:15:03.005005   40893 command_runner.go:130] > # Specify enabled metrics collectors.
	I0719 15:15:03.005012   40893 command_runner.go:130] > # Per default all metrics are enabled.
	I0719 15:15:03.005018   40893 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0719 15:15:03.005026   40893 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0719 15:15:03.005031   40893 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0719 15:15:03.005039   40893 command_runner.go:130] > # metrics_collectors = [
	I0719 15:15:03.005044   40893 command_runner.go:130] > # 	"operations",
	I0719 15:15:03.005054   40893 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0719 15:15:03.005061   40893 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0719 15:15:03.005065   40893 command_runner.go:130] > # 	"operations_errors",
	I0719 15:15:03.005071   40893 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0719 15:15:03.005075   40893 command_runner.go:130] > # 	"image_pulls_by_name",
	I0719 15:15:03.005082   40893 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0719 15:15:03.005088   40893 command_runner.go:130] > # 	"image_pulls_failures",
	I0719 15:15:03.005093   40893 command_runner.go:130] > # 	"image_pulls_successes",
	I0719 15:15:03.005097   40893 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0719 15:15:03.005103   40893 command_runner.go:130] > # 	"image_layer_reuse",
	I0719 15:15:03.005107   40893 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0719 15:15:03.005113   40893 command_runner.go:130] > # 	"containers_oom_total",
	I0719 15:15:03.005117   40893 command_runner.go:130] > # 	"containers_oom",
	I0719 15:15:03.005120   40893 command_runner.go:130] > # 	"processes_defunct",
	I0719 15:15:03.005126   40893 command_runner.go:130] > # 	"operations_total",
	I0719 15:15:03.005130   40893 command_runner.go:130] > # 	"operations_latency_seconds",
	I0719 15:15:03.005135   40893 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0719 15:15:03.005139   40893 command_runner.go:130] > # 	"operations_errors_total",
	I0719 15:15:03.005145   40893 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0719 15:15:03.005149   40893 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0719 15:15:03.005155   40893 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0719 15:15:03.005160   40893 command_runner.go:130] > # 	"image_pulls_success_total",
	I0719 15:15:03.005166   40893 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0719 15:15:03.005170   40893 command_runner.go:130] > # 	"containers_oom_count_total",
	I0719 15:15:03.005176   40893 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0719 15:15:03.005185   40893 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0719 15:15:03.005190   40893 command_runner.go:130] > # ]
	I0719 15:15:03.005195   40893 command_runner.go:130] > # The port on which the metrics server will listen.
	I0719 15:15:03.005200   40893 command_runner.go:130] > # metrics_port = 9090
	I0719 15:15:03.005205   40893 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0719 15:15:03.005208   40893 command_runner.go:130] > # metrics_socket = ""
	I0719 15:15:03.005215   40893 command_runner.go:130] > # The certificate for the secure metrics server.
	I0719 15:15:03.005221   40893 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0719 15:15:03.005229   40893 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0719 15:15:03.005233   40893 command_runner.go:130] > # certificate on any modification event.
	I0719 15:15:03.005237   40893 command_runner.go:130] > # metrics_cert = ""
	I0719 15:15:03.005328   40893 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0719 15:15:03.005461   40893 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0719 15:15:03.005474   40893 command_runner.go:130] > # metrics_key = ""
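With enable_metrics = true as set above and the default metrics_port of 9090, the collectors listed in the comments are exposed in Prometheus text format. A quick check from the node (a sketch, assuming the default port is in use):

	curl -s http://127.0.0.1:9090/metrics | grep -E '^(container_runtime_)?crio_(operations|image_pulls)' | head -n 10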
	I0719 15:15:03.005497   40893 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0719 15:15:03.005504   40893 command_runner.go:130] > [crio.tracing]
	I0719 15:15:03.005519   40893 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0719 15:15:03.005531   40893 command_runner.go:130] > # enable_tracing = false
	I0719 15:15:03.005578   40893 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0719 15:15:03.005629   40893 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0719 15:15:03.005649   40893 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0719 15:15:03.005662   40893 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0719 15:15:03.005679   40893 command_runner.go:130] > # CRI-O NRI configuration.
	I0719 15:15:03.005684   40893 command_runner.go:130] > [crio.nri]
	I0719 15:15:03.005691   40893 command_runner.go:130] > # Globally enable or disable NRI.
	I0719 15:15:03.005700   40893 command_runner.go:130] > # enable_nri = false
	I0719 15:15:03.005709   40893 command_runner.go:130] > # NRI socket to listen on.
	I0719 15:15:03.005721   40893 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0719 15:15:03.005728   40893 command_runner.go:130] > # NRI plugin directory to use.
	I0719 15:15:03.005735   40893 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0719 15:15:03.005743   40893 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0719 15:15:03.005755   40893 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0719 15:15:03.005763   40893 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0719 15:15:03.005769   40893 command_runner.go:130] > # nri_disable_connections = false
	I0719 15:15:03.005777   40893 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0719 15:15:03.005789   40893 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0719 15:15:03.005796   40893 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0719 15:15:03.005803   40893 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0719 15:15:03.005821   40893 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0719 15:15:03.005827   40893 command_runner.go:130] > [crio.stats]
	I0719 15:15:03.005836   40893 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0719 15:15:03.005844   40893 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0719 15:15:03.005856   40893 command_runner.go:130] > # stats_collection_period = 0
	I0719 15:15:03.006082   40893 cni.go:84] Creating CNI manager for ""
	I0719 15:15:03.006095   40893 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 15:15:03.006111   40893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:15:03.006147   40893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-121443 NodeName:multinode-121443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:15:03.006415   40893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-121443"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
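	The generated config above is rendered to the node as /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below). To sanity-check such a file by hand, kubeadm (v1.26 and later) can validate it directly; a sketch, run on the control-plane node:

	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new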
	
	I0719 15:15:03.006496   40893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:15:03.018800   40893 command_runner.go:130] > kubeadm
	I0719 15:15:03.018826   40893 command_runner.go:130] > kubectl
	I0719 15:15:03.018833   40893 command_runner.go:130] > kubelet
	I0719 15:15:03.018854   40893 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:15:03.018956   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:15:03.029627   40893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0719 15:15:03.046862   40893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:15:03.064137   40893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 15:15:03.080619   40893 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0719 15:15:03.084525   40893 command_runner.go:130] > 192.168.39.32	control-plane.minikube.internal
	I0719 15:15:03.084595   40893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:15:03.220569   40893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:15:03.236315   40893 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443 for IP: 192.168.39.32
	I0719 15:15:03.236340   40893 certs.go:194] generating shared ca certs ...
	I0719 15:15:03.236370   40893 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:15:03.236513   40893 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:15:03.236550   40893 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:15:03.236559   40893 certs.go:256] generating profile certs ...
	I0719 15:15:03.236654   40893 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/client.key
	I0719 15:15:03.236708   40893 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.key.e7ca767b
	I0719 15:15:03.236745   40893 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.key
	I0719 15:15:03.236755   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 15:15:03.236770   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 15:15:03.236782   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 15:15:03.236795   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 15:15:03.236804   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 15:15:03.236818   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 15:15:03.236831   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 15:15:03.236842   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 15:15:03.236888   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:15:03.236916   40893 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:15:03.236926   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:15:03.236946   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:15:03.236967   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:15:03.236987   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:15:03.237022   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:15:03.237047   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.237059   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.237072   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.237733   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:15:03.262858   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:15:03.287541   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:15:03.314697   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:15:03.339389   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:15:03.362633   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:15:03.386069   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:15:03.409114   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:15:03.432252   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:15:03.456053   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:15:03.481819   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:15:03.505986   40893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:15:03.523239   40893 ssh_runner.go:195] Run: openssl version
	I0719 15:15:03.529542   40893 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 15:15:03.529602   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:15:03.541105   40893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.545516   40893 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.545540   40893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.545576   40893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.551087   40893 command_runner.go:130] > b5213941
	I0719 15:15:03.551226   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:15:03.561292   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:15:03.572703   40893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.577322   40893 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.577356   40893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.577394   40893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.583490   40893 command_runner.go:130] > 51391683
	I0719 15:15:03.583554   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:15:03.594272   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:15:03.606354   40893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.610887   40893 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.610916   40893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.610956   40893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.616617   40893 command_runner.go:130] > 3ec20f2e
	I0719 15:15:03.616700   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
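The three hash-and-link sequences above all follow the same OpenSSL convention: CAs in /etc/ssl/certs are looked up by a symlink named after the subject hash, so computing the hash and creating <hash>.0 makes the CA usable by anything that verifies against the system CA path. A sketch of checking that on the node, using paths from this run:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints the hash, e.g. b5213941
	ls -l /etc/ssl/certs/b5213941.0                                            # symlink created by the command above
	openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt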
	I0719 15:15:03.627166   40893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:15:03.632433   40893 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:15:03.632463   40893 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 15:15:03.632471   40893 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0719 15:15:03.632480   40893 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 15:15:03.632491   40893 command_runner.go:130] > Access: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632502   40893 command_runner.go:130] > Modify: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632510   40893 command_runner.go:130] > Change: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632517   40893 command_runner.go:130] >  Birth: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632564   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:15:03.638428   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.638495   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:15:03.643992   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.644162   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:15:03.649767   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.649943   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:15:03.655723   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.655876   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:15:03.661552   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.661769   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:15:03.667625   40893 command_runner.go:130] > Certificate will not expire
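Each of the -checkend 86400 probes above asks whether the certificate is still valid 24 hours from now: exit status 0 produces the "Certificate will not expire" lines, while a non-zero status would make minikube regenerate the cert. The same check by hand (a sketch, using one of the paths from this run):

	if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "still valid for at least 24h"
	else
	  echo "expires within 24h"
	fi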
	I0719 15:15:03.667696   40893 kubeadm.go:392] StartCluster: {Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:15:03.667809   40893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:15:03.667854   40893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:15:03.706139   40893 command_runner.go:130] > dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f
	I0719 15:15:03.706160   40893 command_runner.go:130] > 502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f
	I0719 15:15:03.706166   40893 command_runner.go:130] > 4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24
	I0719 15:15:03.706174   40893 command_runner.go:130] > fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6
	I0719 15:15:03.706186   40893 command_runner.go:130] > 5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf
	I0719 15:15:03.706194   40893 command_runner.go:130] > d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c
	I0719 15:15:03.706205   40893 command_runner.go:130] > 713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8
	I0719 15:15:03.706218   40893 command_runner.go:130] > b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66
	I0719 15:15:03.706255   40893 cri.go:89] found id: "dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f"
	I0719 15:15:03.706267   40893 cri.go:89] found id: "502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f"
	I0719 15:15:03.706273   40893 cri.go:89] found id: "4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24"
	I0719 15:15:03.706278   40893 cri.go:89] found id: "fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6"
	I0719 15:15:03.706282   40893 cri.go:89] found id: "5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf"
	I0719 15:15:03.706287   40893 cri.go:89] found id: "d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c"
	I0719 15:15:03.706292   40893 cri.go:89] found id: "713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8"
	I0719 15:15:03.706296   40893 cri.go:89] found id: "b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66"
	I0719 15:15:03.706301   40893 cri.go:89] found id: ""
	I0719 15:15:03.706343   40893 ssh_runner.go:195] Run: sudo runc list -f json
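The container IDs found above come from the crictl invocation a few lines earlier: the label filter limits the listing to kube-system pods and --quiet prints only IDs. Run on the node (for example via minikube ssh), the same query in both quiet and human-readable form looks like this (a sketch):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o table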
	
	
	==> CRI-O <==
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.841741148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402211841717775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54b7758e-7217-4efd-9916-a9724664fbc3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.842247394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae2d1d33-147e-4bf9-8f4d-1d6b77e045c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.842318996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae2d1d33-147e-4bf9-8f4d-1d6b77e045c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.842698281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae2d1d33-147e-4bf9-8f4d-1d6b77e045c1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.886859960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03c8a7c0-1600-4b13-a586-dfed34e2aaed name=/runtime.v1.RuntimeService/Version
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.886935756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03c8a7c0-1600-4b13-a586-dfed34e2aaed name=/runtime.v1.RuntimeService/Version
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.887971262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ba7b971-deee-45e0-9487-92c517494cf3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.888342472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402211888320539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ba7b971-deee-45e0-9487-92c517494cf3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.888945607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9c445c3-6b47-49b3-ad1c-83d112ee3242 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.889029004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9c445c3-6b47-49b3-ad1c-83d112ee3242 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.889362866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9c445c3-6b47-49b3-ad1c-83d112ee3242 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.933563960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78b7ba2f-bbce-4254-b61d-d64583f5911c name=/runtime.v1.RuntimeService/Version
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.933637683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78b7ba2f-bbce-4254-b61d-d64583f5911c name=/runtime.v1.RuntimeService/Version
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.941726968Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64683440-c139-455f-86c0-c3476ff8515a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.942353676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402211942294112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64683440-c139-455f-86c0-c3476ff8515a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.942953477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39d0630b-c6b0-4093-bc0d-858ba5bae30c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.943011862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39d0630b-c6b0-4093-bc0d-858ba5bae30c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:51 multinode-121443 crio[2888]: time="2024-07-19 15:16:51.943605931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39d0630b-c6b0-4093-bc0d-858ba5bae30c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:52 multinode-121443 crio[2888]: time="2024-07-19 15:16:52.002770059Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b17361ba-3caa-4b6f-8988-88c2bf955912 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:16:52 multinode-121443 crio[2888]: time="2024-07-19 15:16:52.002844072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b17361ba-3caa-4b6f-8988-88c2bf955912 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:16:52 multinode-121443 crio[2888]: time="2024-07-19 15:16:52.003693792Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ef086d1-c3e3-46f6-b40b-07b1a2442708 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:16:52 multinode-121443 crio[2888]: time="2024-07-19 15:16:52.004279669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402212004253026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ef086d1-c3e3-46f6-b40b-07b1a2442708 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:16:52 multinode-121443 crio[2888]: time="2024-07-19 15:16:52.004751284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d85740f-0778-44fa-bc3c-8c45234f24f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:52 multinode-121443 crio[2888]: time="2024-07-19 15:16:52.004823823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d85740f-0778-44fa-bc3c-8c45234f24f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:16:52 multinode-121443 crio[2888]: time="2024-07-19 15:16:52.005218705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d85740f-0778-44fa-bc3c-8c45234f24f6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6730504f064a0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   cd76fcf9d96bc       busybox-fc5497c4f-9h6kk
	b675e4fd02893       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      About a minute ago   Running             kindnet-cni               1                   83ee156bab4c1       kindnet-5zklk
	3f96708926155       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   5ec847a290916       coredns-7db6d8ff4d-n7t8w
	4359892cfd8b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   5f4e88c7fcb7c       storage-provisioner
	221882dff6c78       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   0dc484e71d384       kube-proxy-lfgrb
	c8218905c2e97       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   e53ea24eb162b       etcd-multinode-121443
	293204b7a9058       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   ece4c48537ed9       kube-controller-manager-multinode-121443
	fa302c97d4878       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   5499a35d5feaa       kube-scheduler-multinode-121443
	e2dc920afc846       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   d1b8609f9f04e       kube-apiserver-multinode-121443
	9ed0e950737fb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   68c7daeecb458       busybox-fc5497c4f-9h6kk
	dc5476a467779       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   4b7d7142af3bd       coredns-7db6d8ff4d-n7t8w
	502bf142ce57c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   e713a189f41ef       storage-provisioner
	4bad540742a5a       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      8 minutes ago        Exited              kindnet-cni               0                   d27205d1dc010       kindnet-5zklk
	fca6c86e8784a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   208cafba1ee95       kube-proxy-lfgrb
	5fe052e6e6bde       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   df453b9ddc913       etcd-multinode-121443
	d5d6d5432c5ba       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   eed93d6bc6357       kube-controller-manager-multinode-121443
	713959ddae427       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   8c95efc9309d5       kube-apiserver-multinode-121443
	b48a01b01787f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   5cc05efdd9692       kube-scheduler-multinode-121443
	
	
	==> coredns [3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48649 - 2885 "HINFO IN 1538708654439137221.1084675691582257083. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016121561s
	
	
	==> coredns [dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f] <==
	[INFO] 10.244.1.2:57105 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001844812s
	[INFO] 10.244.1.2:43685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108904s
	[INFO] 10.244.1.2:45894 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105631s
	[INFO] 10.244.1.2:47255 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001257652s
	[INFO] 10.244.1.2:37025 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062341s
	[INFO] 10.244.1.2:47296 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053579s
	[INFO] 10.244.1.2:37030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100183s
	[INFO] 10.244.0.3:40198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130982s
	[INFO] 10.244.0.3:58455 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064572s
	[INFO] 10.244.0.3:36902 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083042s
	[INFO] 10.244.0.3:56286 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057483s
	[INFO] 10.244.1.2:35076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101714s
	[INFO] 10.244.1.2:49410 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134587s
	[INFO] 10.244.1.2:48107 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066565s
	[INFO] 10.244.1.2:59682 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107312s
	[INFO] 10.244.0.3:50711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079336s
	[INFO] 10.244.0.3:52831 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112951s
	[INFO] 10.244.0.3:43664 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070195s
	[INFO] 10.244.0.3:57699 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126789s
	[INFO] 10.244.1.2:37267 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205656s
	[INFO] 10.244.1.2:49685 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109151s
	[INFO] 10.244.1.2:40234 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100147s
	[INFO] 10.244.1.2:50205 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066778s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-121443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-121443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=multinode-121443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_08_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:08:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-121443
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:16:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-121443
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8590a9863ad4803a0c4feae042fd645
	  System UUID:                b8590a98-63ad-4803-a0c4-feae042fd645
	  Boot ID:                    acc7e1ed-b057-4c29-a709-81cc8cb1ff0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9h6kk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 coredns-7db6d8ff4d-n7t8w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m21s
	  kube-system                 etcd-multinode-121443                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m35s
	  kube-system                 kindnet-5zklk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-multinode-121443             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-multinode-121443    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-proxy-lfgrb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-multinode-121443             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m19s                kube-proxy       
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m35s                kubelet          Node multinode-121443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m35s                kubelet          Node multinode-121443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s                kubelet          Node multinode-121443 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m35s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m23s                node-controller  Node multinode-121443 event: Registered Node multinode-121443 in Controller
	  Normal  NodeReady                8m9s                 kubelet          Node multinode-121443 status is now: NodeReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node multinode-121443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node multinode-121443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)  kubelet          Node multinode-121443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           91s                  node-controller  Node multinode-121443 event: Registered Node multinode-121443 in Controller
	
	
	Name:               multinode-121443-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-121443-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=multinode-121443
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T15_15_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:15:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-121443-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:16:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:15:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:15:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:15:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:16:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    multinode-121443-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e1fb89433da44eeaf6316cfc1c10470
	  System UUID:                9e1fb894-33da-44ee-af63-16cfc1c10470
	  Boot ID:                    0ec49468-fdfc-4697-a6b7-db70fa7c24fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sr7hh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-5lddz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m37s
	  kube-system                 kube-proxy-gvgth           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m32s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m37s (x3 over 7m38s)  kubelet     Node multinode-121443-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m37s (x3 over 7m38s)  kubelet     Node multinode-121443-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m37s (x3 over 7m38s)  kubelet     Node multinode-121443-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m18s                  kubelet     Node multinode-121443-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-121443-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-121443-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-121443-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-121443-m02 status is now: NodeReady
	
	
	Name:               multinode-121443-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-121443-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=multinode-121443
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T15_16_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:16:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-121443-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:16:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:16:49 +0000   Fri, 19 Jul 2024 15:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:16:49 +0000   Fri, 19 Jul 2024 15:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:16:49 +0000   Fri, 19 Jul 2024 15:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:16:49 +0000   Fri, 19 Jul 2024 15:16:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    multinode-121443-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 65ebaaaa0b944f188f580a4a52588ec2
	  System UUID:                65ebaaaa-0b94-4f18-8f58-0a4a52588ec2
	  Boot ID:                    7cd26fba-f599-41ea-ad25-30c9bc8f06ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fnr7q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-proxy-hdr4s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m31s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m43s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m37s (x2 over 6m37s)  kubelet     Node multinode-121443-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x2 over 6m37s)  kubelet     Node multinode-121443-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x2 over 6m37s)  kubelet     Node multinode-121443-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m17s                  kubelet     Node multinode-121443-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet     Node multinode-121443-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet     Node multinode-121443-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet     Node multinode-121443-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m28s                  kubelet     Node multinode-121443-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-121443-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-121443-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-121443-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-121443-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.061431] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049950] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.174577] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.144673] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.288594] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.252066] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.530601] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.064557] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989589] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.084264] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.997931] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.094934] systemd-fstab-generator[1468]: Ignoring "noauto" option for root device
	[ +13.142270] kauditd_printk_skb: 60 callbacks suppressed
	[Jul19 15:09] kauditd_printk_skb: 12 callbacks suppressed
	[Jul19 15:15] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.158606] systemd-fstab-generator[2817]: Ignoring "noauto" option for root device
	[  +0.181983] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.159953] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.286473] systemd-fstab-generator[2871]: Ignoring "noauto" option for root device
	[  +0.816751] systemd-fstab-generator[2972]: Ignoring "noauto" option for root device
	[  +2.262865] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +4.619606] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.875153] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.969547] systemd-fstab-generator[3931]: Ignoring "noauto" option for root device
	[ +16.965209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf] <==
	{"level":"warn","ts":"2024-07-19T15:09:14.868934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.040131ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316538273496601209 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-rnd69\" mod_revision:441 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-rnd69\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-rnd69\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T15:09:14.869317Z","caller":"traceutil/trace.go:171","msg":"trace[760675968] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"116.730086ms","start":"2024-07-19T15:09:14.752553Z","end":"2024-07-19T15:09:14.869283Z","steps":["trace[760675968] 'compare'  (duration: 113.854721ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:09:15.005176Z","caller":"traceutil/trace.go:171","msg":"trace[1572115174] linearizableReadLoop","detail":"{readStateIndex:466; appliedIndex:465; }","duration":"105.623535ms","start":"2024-07-19T15:09:14.899538Z","end":"2024-07-19T15:09:15.005161Z","steps":["trace[1572115174] 'read index received'  (duration: 104.56728ms)","trace[1572115174] 'applied index is now lower than readState.Index'  (duration: 1.055611ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T15:09:15.005264Z","caller":"traceutil/trace.go:171","msg":"trace[166726604] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"131.167486ms","start":"2024-07-19T15:09:14.874091Z","end":"2024-07-19T15:09:15.005258Z","steps":["trace[166726604] 'process raft request'  (duration: 130.186366ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:09:15.005515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.869098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-121443-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T15:09:15.005553Z","caller":"traceutil/trace.go:171","msg":"trace[1300675654] range","detail":"{range_begin:/registry/minions/multinode-121443-m02; range_end:; response_count:0; response_revision:443; }","duration":"106.057329ms","start":"2024-07-19T15:09:14.89949Z","end":"2024-07-19T15:09:15.005547Z","steps":["trace[1300675654] 'agreement among raft nodes before linearized reading'  (duration: 105.876174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:09:20.091469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.709247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T15:09:20.091537Z","caller":"traceutil/trace.go:171","msg":"trace[1183754555] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:480; }","duration":"159.862155ms","start":"2024-07-19T15:09:19.931659Z","end":"2024-07-19T15:09:20.091522Z","steps":["trace[1183754555] 'count revisions from in-memory index tree'  (duration: 159.640238ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:09:20.091758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.286032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-121443-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-07-19T15:09:20.091799Z","caller":"traceutil/trace.go:171","msg":"trace[1650685763] range","detail":"{range_begin:/registry/minions/multinode-121443-m02; range_end:; response_count:1; response_revision:480; }","duration":"201.329204ms","start":"2024-07-19T15:09:19.890462Z","end":"2024-07-19T15:09:20.091791Z","steps":["trace[1650685763] 'range keys from in-memory index tree'  (duration: 201.188902ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:10:15.861948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.741147ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316538273496601688 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-121443-m03.17e3a5d4b6616e41\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-121443-m03.17e3a5d4b6616e41\" value_size:646 lease:7316538273496601280 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T15:10:15.862534Z","caller":"traceutil/trace.go:171","msg":"trace[1149755698] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"247.661244ms","start":"2024-07-19T15:10:15.61486Z","end":"2024-07-19T15:10:15.862521Z","steps":["trace[1149755698] 'process raft request'  (duration: 78.305828ms)","trace[1149755698] 'compare'  (duration: 168.632811ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T15:10:15.862846Z","caller":"traceutil/trace.go:171","msg":"trace[477100261] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"176.181093ms","start":"2024-07-19T15:10:15.686654Z","end":"2024-07-19T15:10:15.862835Z","steps":["trace[477100261] 'process raft request'  (duration: 175.794065ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:10:17.496844Z","caller":"traceutil/trace.go:171","msg":"trace[1953329838] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"180.89928ms","start":"2024-07-19T15:10:17.31593Z","end":"2024-07-19T15:10:17.496829Z","steps":["trace[1953329838] 'process raft request'  (duration: 180.80395ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:10:17.686372Z","caller":"traceutil/trace.go:171","msg":"trace[388183798] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"102.666999ms","start":"2024-07-19T15:10:17.58369Z","end":"2024-07-19T15:10:17.686357Z","steps":["trace[388183798] 'process raft request'  (duration: 101.655209ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:13:30.132174Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T15:13:30.132371Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-121443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	{"level":"warn","ts":"2024-07-19T15:13:30.132519Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T15:13:30.13261Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T15:13:30.183978Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T15:13:30.184053Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T15:13:30.184133Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4c05646b7156589","current-leader-member-id":"d4c05646b7156589"}
	{"level":"info","ts":"2024-07-19T15:13:30.188681Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:13:30.188851Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:13:30.188881Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-121443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	
	
	==> etcd [c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442] <==
	{"level":"info","ts":"2024-07-19T15:15:06.860873Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:15:06.860883Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:15:06.861134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 switched to configuration voters=(15330347993288500617)"}
	{"level":"info","ts":"2024-07-19T15:15:06.861211Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","added-peer-id":"d4c05646b7156589","added-peer-peer-urls":["https://192.168.39.32:2380"]}
	{"level":"info","ts":"2024-07-19T15:15:06.861357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:15:06.861454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:15:06.86937Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:15:06.880486Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:15:06.888075Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:15:06.897552Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:15:06.899465Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:15:08.160674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T15:15:08.160759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:15:08.160786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-07-19T15:15:08.160799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.160809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.16082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.16083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.169566Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-121443 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:15:08.169793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:15:08.169936Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:15:08.169998Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:15:08.170089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:15:08.172144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-07-19T15:15:08.172237Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:16:52 up 9 min,  0 users,  load average: 0.49, 0.33, 0.16
	Linux multinode-121443 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24] <==
	I0719 15:12:43.679570       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:12:53.679722       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:12:53.679839       1 main.go:303] handling current node
	I0719 15:12:53.679891       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:12:53.679912       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:12:53.680112       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:12:53.680141       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:13:03.677611       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:13:03.677667       1 main.go:303] handling current node
	I0719 15:13:03.677685       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:13:03.677692       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:13:03.677829       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:13:03.677859       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:13:13.678597       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:13:13.678668       1 main.go:303] handling current node
	I0719 15:13:13.678692       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:13:13.678698       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:13:13.678905       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:13:13.678930       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:13:23.676957       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:13:23.677066       1 main.go:303] handling current node
	I0719 15:13:23.677095       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:13:23.677114       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:13:23.677315       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:13:23.677339       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3] <==
	I0719 15:16:11.387760       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:16:21.394498       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:16:21.394546       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:16:21.394667       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:16:21.394673       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:16:21.395003       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:16:21.395039       1 main.go:303] handling current node
	I0719 15:16:31.387246       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:16:31.387358       1 main.go:303] handling current node
	I0719 15:16:31.387388       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:16:31.387453       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:16:31.387619       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:16:31.387688       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.2.0/24] 
	I0719 15:16:41.387088       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:16:41.387371       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.2.0/24] 
	I0719 15:16:41.387685       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:16:41.387755       1 main.go:303] handling current node
	I0719 15:16:41.387788       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:16:41.387887       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:16:51.388563       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:16:51.388620       1 main.go:303] handling current node
	I0719 15:16:51.388671       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:16:51.388679       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:16:51.388786       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:16:51.388792       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8] <==
	W0719 15:13:30.165094       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165153       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165196       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165259       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165329       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165391       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165861       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165930       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165970       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166029       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166096       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166216       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166315       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166381       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166484       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166525       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166584       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166650       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166712       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166769       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166830       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167006       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167070       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167126       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167183       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376] <==
	I0719 15:15:09.509207       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 15:15:09.509894       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 15:15:09.519320       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 15:15:09.519435       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 15:15:09.519476       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 15:15:09.519832       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 15:15:09.519889       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 15:15:09.519992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 15:15:09.525131       1 aggregator.go:165] initial CRD sync complete...
	I0719 15:15:09.525197       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 15:15:09.525223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 15:15:09.525246       1 cache.go:39] Caches are synced for autoregister controller
	I0719 15:15:09.526828       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0719 15:15:09.542491       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 15:15:09.563746       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 15:15:09.563785       1 policy_source.go:224] refreshing policies
	I0719 15:15:09.590921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 15:15:10.447055       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 15:15:11.625180       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 15:15:11.773694       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 15:15:11.787734       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 15:15:11.861837       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 15:15:11.867940       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 15:15:21.853201       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 15:15:21.857028       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799] <==
	I0719 15:15:22.546007       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 15:15:47.060807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.206481ms"
	I0719 15:15:47.061019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.508µs"
	I0719 15:15:47.075480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.397922ms"
	I0719 15:15:47.088689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.161425ms"
	I0719 15:15:47.088761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.478µs"
	I0719 15:15:51.274917       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m02\" does not exist"
	I0719 15:15:51.297798       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m02" podCIDRs=["10.244.1.0/24"]
	I0719 15:15:52.446489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.824µs"
	I0719 15:15:53.162165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.304µs"
	I0719 15:15:53.178570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.783µs"
	I0719 15:15:53.190560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.23µs"
	I0719 15:15:53.254221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.609µs"
	I0719 15:15:53.259200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.65µs"
	I0719 15:15:53.263451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.613µs"
	I0719 15:16:11.073445       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:16:11.091898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.871µs"
	I0719 15:16:11.108064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.507µs"
	I0719 15:16:14.710900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.836319ms"
	I0719 15:16:14.711118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.162µs"
	I0719 15:16:29.238334       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:16:30.266267       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:16:30.266904       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m03\" does not exist"
	I0719 15:16:30.285598       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m03" podCIDRs=["10.244.2.0/24"]
	I0719 15:16:49.101490       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m03"
	
	
	==> kube-controller-manager [d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c] <==
	I0719 15:09:15.054245       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m02\" does not exist"
	I0719 15:09:15.069951       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m02" podCIDRs=["10.244.1.0/24"]
	I0719 15:09:15.140929       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-121443-m02"
	I0719 15:09:34.886157       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:09:36.954146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.774533ms"
	I0719 15:09:36.967723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.348543ms"
	I0719 15:09:36.967838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.963µs"
	I0719 15:09:36.982525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="164.868µs"
	I0719 15:09:40.742188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.3474ms"
	I0719 15:09:40.742484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.624µs"
	I0719 15:09:41.875270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.44058ms"
	I0719 15:09:41.875574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.304µs"
	I0719 15:10:15.865836       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m03\" does not exist"
	I0719 15:10:15.866854       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:10:15.876812       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m03" podCIDRs=["10.244.2.0/24"]
	I0719 15:10:20.162787       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-121443-m03"
	I0719 15:10:35.769104       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:11:03.796845       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:11:04.783905       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m03\" does not exist"
	I0719 15:11:04.784320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:11:04.795896       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m03" podCIDRs=["10.244.3.0/24"]
	I0719 15:11:24.660227       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:12:10.220075       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m03"
	I0719 15:12:10.268830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.784234ms"
	I0719 15:12:10.268942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.248µs"
	
	
	==> kube-proxy [221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33] <==
	I0719 15:15:10.528640       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:15:10.549708       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	I0719 15:15:10.651440       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:15:10.651499       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:15:10.651529       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:15:10.656722       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:15:10.656949       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:15:10.656979       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:15:10.659339       1 config.go:192] "Starting service config controller"
	I0719 15:15:10.659381       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:15:10.659487       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:15:10.659508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:15:10.660125       1 config.go:319] "Starting node config controller"
	I0719 15:15:10.660149       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:15:10.759505       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:15:10.759611       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:15:10.760232       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6] <==
	I0719 15:08:32.599334       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:08:32.622328       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	I0719 15:08:32.670526       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:08:32.670573       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:08:32.670589       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:08:32.674670       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:08:32.674901       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:08:32.675026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:08:32.676197       1 config.go:192] "Starting service config controller"
	I0719 15:08:32.676243       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:08:32.676361       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:08:32.676384       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:08:32.678918       1 config.go:319] "Starting node config controller"
	I0719 15:08:32.678954       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:08:32.776682       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:08:32.776766       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:08:32.779358       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66] <==
	E0719 15:08:14.729744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 15:08:14.728262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 15:08:14.729774       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 15:08:14.724638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:14.729808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.557892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 15:08:15.557943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 15:08:15.574341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 15:08:15.574389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 15:08:15.700606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:15.700654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.733301       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 15:08:15.733754       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:08:15.742650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 15:08:15.743025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 15:08:15.743630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 15:08:15.743703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 15:08:15.859851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:15.859964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.913469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:15.913645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.959038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 15:08:15.959136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0719 15:08:18.217576       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 15:13:30.123170       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27] <==
	I0719 15:15:07.605899       1 serving.go:380] Generated self-signed cert in-memory
	W0719 15:15:09.452675       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 15:15:09.452821       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:15:09.452855       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 15:15:09.452928       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 15:15:09.528285       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:15:09.531261       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:15:09.542188       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:15:09.542238       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:15:09.543018       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:15:09.543120       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 15:15:09.643248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 15:15:06 multinode-121443 kubelet[3102]: E0719 15:15:06.315744    3102 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.32:8443: connect: connection refused" node="multinode-121443"
	Jul 19 15:15:07 multinode-121443 kubelet[3102]: I0719 15:15:07.118039    3102 kubelet_node_status.go:73] "Attempting to register node" node="multinode-121443"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.597594    3102 apiserver.go:52] "Watching apiserver"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.602123    3102 topology_manager.go:215] "Topology Admit Handler" podUID="89ea64a0-446e-407e-af1f-be575c590316" podNamespace="kube-system" podName="kube-proxy-lfgrb"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.602507    3102 topology_manager.go:215] "Topology Admit Handler" podUID="e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63" podNamespace="kube-system" podName="kindnet-5zklk"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.602682    3102 topology_manager.go:215] "Topology Admit Handler" podUID="d0b632c7-c920-41a3-92ba-97091eb2779b" podNamespace="kube-system" podName="storage-provisioner"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.602806    3102 topology_manager.go:215] "Topology Admit Handler" podUID="bf335596-f0a3-4e1f-ac5f-872595652c60" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n7t8w"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.602931    3102 topology_manager.go:215] "Topology Admit Handler" podUID="662b2304-ae04-4f1c-9246-952f88717e35" podNamespace="default" podName="busybox-fc5497c4f-9h6kk"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.666873    3102 kubelet_node_status.go:112] "Node was previously registered" node="multinode-121443"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.666985    3102 kubelet_node_status.go:76] "Successfully registered node" node="multinode-121443"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.668630    3102 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.669704    3102 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.701227    3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.765974    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63-lib-modules\") pod \"kindnet-5zklk\" (UID: \"e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63\") " pod="kube-system/kindnet-5zklk"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766131    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d0b632c7-c920-41a3-92ba-97091eb2779b-tmp\") pod \"storage-provisioner\" (UID: \"d0b632c7-c920-41a3-92ba-97091eb2779b\") " pod="kube-system/storage-provisioner"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766211    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63-cni-cfg\") pod \"kindnet-5zklk\" (UID: \"e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63\") " pod="kube-system/kindnet-5zklk"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766257    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63-xtables-lock\") pod \"kindnet-5zklk\" (UID: \"e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63\") " pod="kube-system/kindnet-5zklk"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766308    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89ea64a0-446e-407e-af1f-be575c590316-lib-modules\") pod \"kube-proxy-lfgrb\" (UID: \"89ea64a0-446e-407e-af1f-be575c590316\") " pod="kube-system/kube-proxy-lfgrb"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766368    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89ea64a0-446e-407e-af1f-be575c590316-xtables-lock\") pod \"kube-proxy-lfgrb\" (UID: \"89ea64a0-446e-407e-af1f-be575c590316\") " pod="kube-system/kube-proxy-lfgrb"
	Jul 19 15:15:13 multinode-121443 kubelet[3102]: I0719 15:15:13.835113    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 19 15:16:05 multinode-121443 kubelet[3102]: E0719 15:16:05.683476    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:16:51.532337   41957 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-3847/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-121443 -n multinode-121443
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-121443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.16s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 stop
E0719 15:17:29.032294   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-121443 stop: exit status 82 (2m0.464669638s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-121443-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-121443 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-121443 status: exit status 3 (18.695240974s)

                                                
                                                
-- stdout --
	multinode-121443
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-121443-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:19:14.770585   43070 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0719 15:19:14.770619   43070 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-121443 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-121443 -n multinode-121443
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-121443 logs -n 25: (1.476545862s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443:/home/docker/cp-test_multinode-121443-m02_multinode-121443.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443 sudo cat                                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m02_multinode-121443.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03:/home/docker/cp-test_multinode-121443-m02_multinode-121443-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443-m03 sudo cat                                   | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m02_multinode-121443-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp testdata/cp-test.txt                                                | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4276887194/001/cp-test_multinode-121443-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443:/home/docker/cp-test_multinode-121443-m03_multinode-121443.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443 sudo cat                                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m03_multinode-121443.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt                       | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m02:/home/docker/cp-test_multinode-121443-m03_multinode-121443-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n                                                                 | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | multinode-121443-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-121443 ssh -n multinode-121443-m02 sudo cat                                   | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	|         | /home/docker/cp-test_multinode-121443-m03_multinode-121443-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-121443 node stop m03                                                          | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:10 UTC |
	| node    | multinode-121443 node start                                                             | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:10 UTC | 19 Jul 24 15:11 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-121443                                                                | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:11 UTC |                     |
	| stop    | -p multinode-121443                                                                     | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:11 UTC |                     |
	| start   | -p multinode-121443                                                                     | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:13 UTC | 19 Jul 24 15:16 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-121443                                                                | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:16 UTC |                     |
	| node    | multinode-121443 node delete                                                            | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:16 UTC | 19 Jul 24 15:16 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-121443 stop                                                                   | multinode-121443 | jenkins | v1.33.1 | 19 Jul 24 15:16 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:13:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:13:29.170840   40893 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:13:29.170944   40893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:13:29.170952   40893 out.go:304] Setting ErrFile to fd 2...
	I0719 15:13:29.170957   40893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:13:29.171119   40893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:13:29.171608   40893 out.go:298] Setting JSON to false
	I0719 15:13:29.172482   40893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3355,"bootTime":1721398654,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:13:29.172537   40893 start.go:139] virtualization: kvm guest
	I0719 15:13:29.174813   40893 out.go:177] * [multinode-121443] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:13:29.176156   40893 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:13:29.176156   40893 notify.go:220] Checking for updates...
	I0719 15:13:29.177561   40893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:13:29.178826   40893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:13:29.180107   40893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:13:29.181556   40893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:13:29.182857   40893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:13:29.184445   40893 config.go:182] Loaded profile config "multinode-121443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:13:29.184530   40893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:13:29.184935   40893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:13:29.184987   40893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:13:29.199530   40893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
	I0719 15:13:29.199924   40893 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:13:29.200468   40893 main.go:141] libmachine: Using API Version  1
	I0719 15:13:29.200487   40893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:13:29.200797   40893 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:13:29.200981   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:13:29.233602   40893 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:13:29.234815   40893 start.go:297] selected driver: kvm2
	I0719 15:13:29.234827   40893 start.go:901] validating driver "kvm2" against &{Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:13:29.234994   40893 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:13:29.235381   40893 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:13:29.235455   40893 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:13:29.249695   40893 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:13:29.250634   40893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:13:29.250711   40893 cni.go:84] Creating CNI manager for ""
	I0719 15:13:29.250725   40893 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 15:13:29.250797   40893 start.go:340] cluster config:
	{Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:13:29.250982   40893 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:13:29.252652   40893 out.go:177] * Starting "multinode-121443" primary control-plane node in "multinode-121443" cluster
	I0719 15:13:29.253787   40893 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:13:29.253816   40893 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:13:29.253824   40893 cache.go:56] Caching tarball of preloaded images
	I0719 15:13:29.253891   40893 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:13:29.253900   40893 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:13:29.254007   40893 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/config.json ...
	I0719 15:13:29.254179   40893 start.go:360] acquireMachinesLock for multinode-121443: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:13:29.254214   40893 start.go:364] duration metric: took 19.692µs to acquireMachinesLock for "multinode-121443"
	I0719 15:13:29.254226   40893 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:13:29.254230   40893 fix.go:54] fixHost starting: 
	I0719 15:13:29.254505   40893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:13:29.254535   40893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:13:29.267847   40893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0719 15:13:29.268320   40893 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:13:29.268762   40893 main.go:141] libmachine: Using API Version  1
	I0719 15:13:29.268785   40893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:13:29.269077   40893 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:13:29.269245   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:13:29.269380   40893 main.go:141] libmachine: (multinode-121443) Calling .GetState
	I0719 15:13:29.270923   40893 fix.go:112] recreateIfNeeded on multinode-121443: state=Running err=<nil>
	W0719 15:13:29.270939   40893 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:13:29.273568   40893 out.go:177] * Updating the running kvm2 "multinode-121443" VM ...
	I0719 15:13:29.274927   40893 machine.go:94] provisionDockerMachine start ...
	I0719 15:13:29.274945   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:13:29.275107   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.277439   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.277929   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.277950   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.278125   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.278288   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.278421   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.278554   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.278720   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.278892   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.278901   40893 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:13:29.399808   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-121443
	
	I0719 15:13:29.399835   40893 main.go:141] libmachine: (multinode-121443) Calling .GetMachineName
	I0719 15:13:29.400065   40893 buildroot.go:166] provisioning hostname "multinode-121443"
	I0719 15:13:29.400091   40893 main.go:141] libmachine: (multinode-121443) Calling .GetMachineName
	I0719 15:13:29.400245   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.402935   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.403248   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.403275   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.403418   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.403580   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.403721   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.403835   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.403978   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.404140   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.404151   40893 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-121443 && echo "multinode-121443" | sudo tee /etc/hostname
	I0719 15:13:29.542519   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-121443
	
	I0719 15:13:29.542544   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.544998   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.545334   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.545362   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.545480   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.545643   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.545812   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.545922   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.546086   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.546324   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.546344   40893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-121443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-121443/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-121443' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:13:29.667833   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:13:29.667861   40893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:13:29.667898   40893 buildroot.go:174] setting up certificates
	I0719 15:13:29.667908   40893 provision.go:84] configureAuth start
	I0719 15:13:29.667927   40893 main.go:141] libmachine: (multinode-121443) Calling .GetMachineName
	I0719 15:13:29.668169   40893 main.go:141] libmachine: (multinode-121443) Calling .GetIP
	I0719 15:13:29.670421   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.670738   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.670769   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.670933   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.673264   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.673599   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.673630   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.673758   40893 provision.go:143] copyHostCerts
	I0719 15:13:29.673794   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:13:29.673834   40893 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:13:29.673846   40893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:13:29.673915   40893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:13:29.674001   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:13:29.674019   40893 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:13:29.674025   40893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:13:29.674050   40893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:13:29.674107   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:13:29.674122   40893 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:13:29.674128   40893 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:13:29.674148   40893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:13:29.674206   40893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.multinode-121443 san=[127.0.0.1 192.168.39.32 localhost minikube multinode-121443]
	I0719 15:13:29.827902   40893 provision.go:177] copyRemoteCerts
	I0719 15:13:29.827952   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:13:29.827973   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.830681   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.831054   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.831087   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.831233   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.831396   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.831567   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.831683   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:13:29.917142   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 15:13:29.917199   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 15:13:29.942650   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 15:13:29.942730   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:13:29.967966   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 15:13:29.968026   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:13:29.991024   40893 provision.go:87] duration metric: took 323.103999ms to configureAuth
	I0719 15:13:29.991046   40893 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:13:29.991253   40893 config.go:182] Loaded profile config "multinode-121443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:13:29.991347   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:13:29.993785   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.994108   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:13:29.994133   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:13:29.994331   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:13:29.994514   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.994660   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:13:29.994790   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:13:29.995048   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:13:29.995257   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:13:29.995273   40893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:15:00.847984   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:15:00.848012   40893 machine.go:97] duration metric: took 1m31.573071971s to provisionDockerMachine
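	The %!s(MISSING) fragments in the logged SSH command above (and in the date +%!s(MISSING).%!N(MISSING) clock probe later in this log) are Go fmt placeholders that appear when a command template is logged without its arguments re-substituted; judging from the echoed output, the guest most likely executed a command of the following shape. This is a reconstruction for readability, not output captured from the machine:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio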
	I0719 15:15:00.848024   40893 start.go:293] postStartSetup for "multinode-121443" (driver="kvm2")
	I0719 15:15:00.848035   40893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:15:00.848051   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:00.848406   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:15:00.848431   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:00.851790   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.852267   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:00.852288   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.852506   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:00.852699   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:00.852824   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:00.852979   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:15:00.942220   40893 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:15:00.946418   40893 command_runner.go:130] > NAME=Buildroot
	I0719 15:15:00.946436   40893 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 15:15:00.946440   40893 command_runner.go:130] > ID=buildroot
	I0719 15:15:00.946445   40893 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 15:15:00.946449   40893 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 15:15:00.946482   40893 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:15:00.946496   40893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:15:00.946544   40893 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:15:00.946609   40893 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:15:00.946620   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /etc/ssl/certs/110122.pem
	I0719 15:15:00.946712   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:15:00.956222   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:15:00.980132   40893 start.go:296] duration metric: took 132.096007ms for postStartSetup
	I0719 15:15:00.980169   40893 fix.go:56] duration metric: took 1m31.725938844s for fixHost
	I0719 15:15:00.980188   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:00.982758   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.983064   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:00.983102   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:00.983354   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:00.983540   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:00.983698   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:00.983844   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:00.983993   40893 main.go:141] libmachine: Using SSH client type: native
	I0719 15:15:00.984202   40893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0719 15:15:00.984253   40893 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:15:01.095372   40893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721402101.074141469
	
	I0719 15:15:01.095393   40893 fix.go:216] guest clock: 1721402101.074141469
	I0719 15:15:01.095402   40893 fix.go:229] Guest: 2024-07-19 15:15:01.074141469 +0000 UTC Remote: 2024-07-19 15:15:00.980173586 +0000 UTC m=+91.842218458 (delta=93.967883ms)
	I0719 15:15:01.095426   40893 fix.go:200] guest clock delta is within tolerance: 93.967883ms
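	As a quick sanity check of the reported skew (illustrative arithmetic, not part of the captured output): treating the Remote timestamp 2024-07-19 15:15:00.980173586 +0000 UTC as the Unix time 1721402100.980173586, the difference from the guest clock is 1721402101.074141469 - 1721402100.980173586 = 0.093967883 s, i.e. exactly the 93.967883ms delta that fix.go reports as within tolerance.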
	I0719 15:15:01.095432   40893 start.go:83] releasing machines lock for "multinode-121443", held for 1m31.841209887s
	I0719 15:15:01.095457   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.095740   40893 main.go:141] libmachine: (multinode-121443) Calling .GetIP
	I0719 15:15:01.098130   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.098505   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:01.098540   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.098720   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.099321   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.099489   40893 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:15:01.099577   40893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:15:01.099622   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:01.099723   40893 ssh_runner.go:195] Run: cat /version.json
	I0719 15:15:01.099747   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:15:01.102017   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.102439   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.102471   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:01.102519   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.102694   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:01.102841   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:01.102989   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:01.103044   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:01.103067   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:01.103108   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:15:01.103249   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:15:01.103398   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:15:01.103565   40893 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:15:01.103690   40893 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:15:01.183498   40893 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 15:15:01.183793   40893 ssh_runner.go:195] Run: systemctl --version
	I0719 15:15:01.212288   40893 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 15:15:01.212343   40893 command_runner.go:130] > systemd 252 (252)
	I0719 15:15:01.212364   40893 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 15:15:01.212437   40893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:15:01.388981   40893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 15:15:01.395406   40893 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 15:15:01.395468   40893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:15:01.395532   40893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:15:01.405977   40893 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 15:15:01.406001   40893 start.go:495] detecting cgroup driver to use...
	I0719 15:15:01.406072   40893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:15:01.423271   40893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:15:01.437880   40893 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:15:01.437934   40893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:15:01.453383   40893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:15:01.467872   40893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:15:01.626285   40893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:15:01.781158   40893 docker.go:233] disabling docker service ...
	I0719 15:15:01.781231   40893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:15:01.801679   40893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:15:01.817234   40893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:15:01.970187   40893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:15:02.124945   40893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:15:02.140269   40893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:15:02.158982   40893 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0719 15:15:02.159033   40893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:15:02.159090   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.170246   40893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:15:02.170326   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.181405   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.192141   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.202822   40893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:15:02.213538   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.224654   40893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.235598   40893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:15:02.246312   40893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:15:02.256183   40893 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 15:15:02.256275   40893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:15:02.265954   40893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:15:02.402781   40893 ssh_runner.go:195] Run: sudo systemctl restart crio
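	The sed edits above (pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl) all target /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A minimal sketch of the keys that drop-in should end up containing, assuming the stock file keeps them under the usual [crio.image] and [crio.runtime] tables (every other setting in the real file is omitted here):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]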
	I0719 15:15:02.744299   40893 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:15:02.744360   40893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:15:02.750746   40893 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0719 15:15:02.750774   40893 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 15:15:02.750782   40893 command_runner.go:130] > Device: 0,22	Inode: 1321        Links: 1
	I0719 15:15:02.750791   40893 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 15:15:02.750797   40893 command_runner.go:130] > Access: 2024-07-19 15:15:02.639302039 +0000
	I0719 15:15:02.750806   40893 command_runner.go:130] > Modify: 2024-07-19 15:15:02.606301236 +0000
	I0719 15:15:02.750813   40893 command_runner.go:130] > Change: 2024-07-19 15:15:02.606301236 +0000
	I0719 15:15:02.750839   40893 command_runner.go:130] >  Birth: -
	I0719 15:15:02.750871   40893 start.go:563] Will wait 60s for crictl version
	I0719 15:15:02.750933   40893 ssh_runner.go:195] Run: which crictl
	I0719 15:15:02.762073   40893 command_runner.go:130] > /usr/bin/crictl
	I0719 15:15:02.762554   40893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:15:02.801429   40893 command_runner.go:130] > Version:  0.1.0
	I0719 15:15:02.801451   40893 command_runner.go:130] > RuntimeName:  cri-o
	I0719 15:15:02.801456   40893 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0719 15:15:02.801461   40893 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 15:15:02.802437   40893 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:15:02.802535   40893 ssh_runner.go:195] Run: crio --version
	I0719 15:15:02.832276   40893 command_runner.go:130] > crio version 1.29.1
	I0719 15:15:02.832307   40893 command_runner.go:130] > Version:        1.29.1
	I0719 15:15:02.832316   40893 command_runner.go:130] > GitCommit:      unknown
	I0719 15:15:02.832322   40893 command_runner.go:130] > GitCommitDate:  unknown
	I0719 15:15:02.832328   40893 command_runner.go:130] > GitTreeState:   clean
	I0719 15:15:02.832337   40893 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 15:15:02.832343   40893 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 15:15:02.832350   40893 command_runner.go:130] > Compiler:       gc
	I0719 15:15:02.832359   40893 command_runner.go:130] > Platform:       linux/amd64
	I0719 15:15:02.832366   40893 command_runner.go:130] > Linkmode:       dynamic
	I0719 15:15:02.832376   40893 command_runner.go:130] > BuildTags:      
	I0719 15:15:02.832390   40893 command_runner.go:130] >   containers_image_ostree_stub
	I0719 15:15:02.832399   40893 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 15:15:02.832406   40893 command_runner.go:130] >   btrfs_noversion
	I0719 15:15:02.832415   40893 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 15:15:02.832423   40893 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 15:15:02.832429   40893 command_runner.go:130] >   seccomp
	I0719 15:15:02.832435   40893 command_runner.go:130] > LDFlags:          unknown
	I0719 15:15:02.832443   40893 command_runner.go:130] > SeccompEnabled:   true
	I0719 15:15:02.832447   40893 command_runner.go:130] > AppArmorEnabled:  false
	I0719 15:15:02.832517   40893 ssh_runner.go:195] Run: crio --version
	I0719 15:15:02.859990   40893 command_runner.go:130] > crio version 1.29.1
	I0719 15:15:02.860016   40893 command_runner.go:130] > Version:        1.29.1
	I0719 15:15:02.860024   40893 command_runner.go:130] > GitCommit:      unknown
	I0719 15:15:02.860030   40893 command_runner.go:130] > GitCommitDate:  unknown
	I0719 15:15:02.860037   40893 command_runner.go:130] > GitTreeState:   clean
	I0719 15:15:02.860057   40893 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 15:15:02.860063   40893 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 15:15:02.860073   40893 command_runner.go:130] > Compiler:       gc
	I0719 15:15:02.860081   40893 command_runner.go:130] > Platform:       linux/amd64
	I0719 15:15:02.860090   40893 command_runner.go:130] > Linkmode:       dynamic
	I0719 15:15:02.860097   40893 command_runner.go:130] > BuildTags:      
	I0719 15:15:02.860108   40893 command_runner.go:130] >   containers_image_ostree_stub
	I0719 15:15:02.860116   40893 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 15:15:02.860123   40893 command_runner.go:130] >   btrfs_noversion
	I0719 15:15:02.860133   40893 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 15:15:02.860143   40893 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 15:15:02.860150   40893 command_runner.go:130] >   seccomp
	I0719 15:15:02.860159   40893 command_runner.go:130] > LDFlags:          unknown
	I0719 15:15:02.860165   40893 command_runner.go:130] > SeccompEnabled:   true
	I0719 15:15:02.860174   40893 command_runner.go:130] > AppArmorEnabled:  false
	I0719 15:15:02.863442   40893 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:15:02.864846   40893 main.go:141] libmachine: (multinode-121443) Calling .GetIP
	I0719 15:15:02.867471   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:02.867916   40893 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:15:02.867944   40893 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:15:02.868121   40893 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:15:02.872406   40893 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0719 15:15:02.872477   40893 kubeadm.go:883] updating cluster {Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:15:02.872590   40893 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:15:02.872640   40893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:15:02.915715   40893 command_runner.go:130] > {
	I0719 15:15:02.915738   40893 command_runner.go:130] >   "images": [
	I0719 15:15:02.915744   40893 command_runner.go:130] >     {
	I0719 15:15:02.915761   40893 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 15:15:02.915767   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.915775   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 15:15:02.915781   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915795   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.915806   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 15:15:02.915817   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 15:15:02.915822   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915828   40893 command_runner.go:130] >       "size": "87165492",
	I0719 15:15:02.915832   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.915838   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.915847   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.915860   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.915869   40893 command_runner.go:130] >     },
	I0719 15:15:02.915875   40893 command_runner.go:130] >     {
	I0719 15:15:02.915887   40893 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 15:15:02.915901   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.915912   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 15:15:02.915918   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915925   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.915936   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 15:15:02.915952   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 15:15:02.915961   40893 command_runner.go:130] >       ],
	I0719 15:15:02.915968   40893 command_runner.go:130] >       "size": "1363676",
	I0719 15:15:02.915976   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.915988   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.915998   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916007   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916014   40893 command_runner.go:130] >     },
	I0719 15:15:02.916022   40893 command_runner.go:130] >     {
	I0719 15:15:02.916032   40893 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 15:15:02.916042   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916051   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 15:15:02.916057   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916066   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916081   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 15:15:02.916097   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 15:15:02.916105   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916113   40893 command_runner.go:130] >       "size": "31470524",
	I0719 15:15:02.916122   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.916137   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916146   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916153   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916161   40893 command_runner.go:130] >     },
	I0719 15:15:02.916167   40893 command_runner.go:130] >     {
	I0719 15:15:02.916181   40893 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 15:15:02.916191   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916202   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 15:15:02.916209   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916217   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916230   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 15:15:02.916253   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 15:15:02.916261   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916270   40893 command_runner.go:130] >       "size": "61245718",
	I0719 15:15:02.916279   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.916288   40893 command_runner.go:130] >       "username": "nonroot",
	I0719 15:15:02.916295   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916304   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916310   40893 command_runner.go:130] >     },
	I0719 15:15:02.916318   40893 command_runner.go:130] >     {
	I0719 15:15:02.916329   40893 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 15:15:02.916338   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916348   40893 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 15:15:02.916357   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916364   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916378   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 15:15:02.916393   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 15:15:02.916401   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916408   40893 command_runner.go:130] >       "size": "150779692",
	I0719 15:15:02.916417   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.916424   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.916432   40893 command_runner.go:130] >       },
	I0719 15:15:02.916439   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916448   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916455   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916463   40893 command_runner.go:130] >     },
	I0719 15:15:02.916478   40893 command_runner.go:130] >     {
	I0719 15:15:02.916490   40893 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 15:15:02.916499   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916510   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 15:15:02.916518   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916526   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916541   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 15:15:02.916556   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 15:15:02.916564   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916572   40893 command_runner.go:130] >       "size": "117609954",
	I0719 15:15:02.916580   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.916588   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.916596   40893 command_runner.go:130] >       },
	I0719 15:15:02.916603   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916611   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916617   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916621   40893 command_runner.go:130] >     },
	I0719 15:15:02.916626   40893 command_runner.go:130] >     {
	I0719 15:15:02.916636   40893 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 15:15:02.916645   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916656   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 15:15:02.916662   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916670   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916686   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 15:15:02.916702   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 15:15:02.916710   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916718   40893 command_runner.go:130] >       "size": "112198984",
	I0719 15:15:02.916726   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.916733   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.916741   40893 command_runner.go:130] >       },
	I0719 15:15:02.916748   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916757   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916764   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916771   40893 command_runner.go:130] >     },
	I0719 15:15:02.916777   40893 command_runner.go:130] >     {
	I0719 15:15:02.916790   40893 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 15:15:02.916808   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916819   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 15:15:02.916827   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916834   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916867   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 15:15:02.916881   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 15:15:02.916887   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916894   40893 command_runner.go:130] >       "size": "85953945",
	I0719 15:15:02.916903   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.916910   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.916917   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.916927   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.916931   40893 command_runner.go:130] >     },
	I0719 15:15:02.916935   40893 command_runner.go:130] >     {
	I0719 15:15:02.916944   40893 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 15:15:02.916952   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.916961   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 15:15:02.916967   40893 command_runner.go:130] >       ],
	I0719 15:15:02.916974   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.916989   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 15:15:02.917004   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 15:15:02.917013   40893 command_runner.go:130] >       ],
	I0719 15:15:02.917020   40893 command_runner.go:130] >       "size": "63051080",
	I0719 15:15:02.917030   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.917039   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.917045   40893 command_runner.go:130] >       },
	I0719 15:15:02.917055   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.917062   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.917071   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.917079   40893 command_runner.go:130] >     },
	I0719 15:15:02.917086   40893 command_runner.go:130] >     {
	I0719 15:15:02.917097   40893 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 15:15:02.917105   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.917114   40893 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 15:15:02.917122   40893 command_runner.go:130] >       ],
	I0719 15:15:02.917129   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.917150   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 15:15:02.917165   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 15:15:02.917172   40893 command_runner.go:130] >       ],
	I0719 15:15:02.917180   40893 command_runner.go:130] >       "size": "750414",
	I0719 15:15:02.917188   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.917197   40893 command_runner.go:130] >         "value": "65535"
	I0719 15:15:02.917202   40893 command_runner.go:130] >       },
	I0719 15:15:02.917209   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.917219   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.917229   40893 command_runner.go:130] >       "pinned": true
	I0719 15:15:02.917236   40893 command_runner.go:130] >     }
	I0719 15:15:02.917242   40893 command_runner.go:130] >   ]
	I0719 15:15:02.917247   40893 command_runner.go:130] > }
	I0719 15:15:02.917421   40893 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:15:02.917434   40893 crio.go:433] Images already preloaded, skipping extraction
	I0719 15:15:02.917525   40893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:15:02.951905   40893 command_runner.go:130] > {
	I0719 15:15:02.951924   40893 command_runner.go:130] >   "images": [
	I0719 15:15:02.951928   40893 command_runner.go:130] >     {
	I0719 15:15:02.951936   40893 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 15:15:02.951941   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.951947   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 15:15:02.951950   40893 command_runner.go:130] >       ],
	I0719 15:15:02.951954   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.951962   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 15:15:02.951969   40893 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 15:15:02.951972   40893 command_runner.go:130] >       ],
	I0719 15:15:02.951976   40893 command_runner.go:130] >       "size": "87165492",
	I0719 15:15:02.951980   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.951984   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.951992   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.951998   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952001   40893 command_runner.go:130] >     },
	I0719 15:15:02.952004   40893 command_runner.go:130] >     {
	I0719 15:15:02.952009   40893 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 15:15:02.952029   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952037   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 15:15:02.952041   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952045   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952052   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 15:15:02.952061   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 15:15:02.952066   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952070   40893 command_runner.go:130] >       "size": "1363676",
	I0719 15:15:02.952076   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952084   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952101   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952105   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952110   40893 command_runner.go:130] >     },
	I0719 15:15:02.952114   40893 command_runner.go:130] >     {
	I0719 15:15:02.952120   40893 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 15:15:02.952126   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952131   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 15:15:02.952145   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952151   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952161   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 15:15:02.952170   40893 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 15:15:02.952176   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952181   40893 command_runner.go:130] >       "size": "31470524",
	I0719 15:15:02.952187   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952191   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952197   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952201   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952206   40893 command_runner.go:130] >     },
	I0719 15:15:02.952209   40893 command_runner.go:130] >     {
	I0719 15:15:02.952217   40893 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 15:15:02.952225   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952230   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 15:15:02.952235   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952239   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952249   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 15:15:02.952263   40893 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 15:15:02.952274   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952280   40893 command_runner.go:130] >       "size": "61245718",
	I0719 15:15:02.952286   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952295   40893 command_runner.go:130] >       "username": "nonroot",
	I0719 15:15:02.952301   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952305   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952311   40893 command_runner.go:130] >     },
	I0719 15:15:02.952314   40893 command_runner.go:130] >     {
	I0719 15:15:02.952323   40893 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 15:15:02.952329   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952333   40893 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 15:15:02.952337   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952341   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952350   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 15:15:02.952359   40893 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 15:15:02.952364   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952368   40893 command_runner.go:130] >       "size": "150779692",
	I0719 15:15:02.952374   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952377   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952381   40893 command_runner.go:130] >       },
	I0719 15:15:02.952387   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952390   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952396   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952399   40893 command_runner.go:130] >     },
	I0719 15:15:02.952405   40893 command_runner.go:130] >     {
	I0719 15:15:02.952411   40893 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 15:15:02.952416   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952421   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 15:15:02.952427   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952431   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952439   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 15:15:02.952448   40893 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 15:15:02.952458   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952464   40893 command_runner.go:130] >       "size": "117609954",
	I0719 15:15:02.952468   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952475   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952482   40893 command_runner.go:130] >       },
	I0719 15:15:02.952488   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952492   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952498   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952501   40893 command_runner.go:130] >     },
	I0719 15:15:02.952506   40893 command_runner.go:130] >     {
	I0719 15:15:02.952512   40893 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 15:15:02.952518   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952523   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 15:15:02.952529   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952533   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952542   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 15:15:02.952551   40893 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 15:15:02.952559   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952563   40893 command_runner.go:130] >       "size": "112198984",
	I0719 15:15:02.952568   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952572   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952577   40893 command_runner.go:130] >       },
	I0719 15:15:02.952581   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952587   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952591   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952598   40893 command_runner.go:130] >     },
	I0719 15:15:02.952602   40893 command_runner.go:130] >     {
	I0719 15:15:02.952608   40893 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 15:15:02.952614   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952620   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 15:15:02.952625   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952629   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952678   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 15:15:02.952690   40893 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 15:15:02.952694   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952698   40893 command_runner.go:130] >       "size": "85953945",
	I0719 15:15:02.952704   40893 command_runner.go:130] >       "uid": null,
	I0719 15:15:02.952708   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952713   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952717   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952727   40893 command_runner.go:130] >     },
	I0719 15:15:02.952733   40893 command_runner.go:130] >     {
	I0719 15:15:02.952739   40893 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 15:15:02.952745   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952749   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 15:15:02.952752   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952756   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952765   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 15:15:02.952774   40893 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 15:15:02.952779   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952783   40893 command_runner.go:130] >       "size": "63051080",
	I0719 15:15:02.952789   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952792   40893 command_runner.go:130] >         "value": "0"
	I0719 15:15:02.952798   40893 command_runner.go:130] >       },
	I0719 15:15:02.952802   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952808   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952812   40893 command_runner.go:130] >       "pinned": false
	I0719 15:15:02.952817   40893 command_runner.go:130] >     },
	I0719 15:15:02.952820   40893 command_runner.go:130] >     {
	I0719 15:15:02.952828   40893 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 15:15:02.952832   40893 command_runner.go:130] >       "repoTags": [
	I0719 15:15:02.952838   40893 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 15:15:02.952842   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952852   40893 command_runner.go:130] >       "repoDigests": [
	I0719 15:15:02.952859   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 15:15:02.952869   40893 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 15:15:02.952875   40893 command_runner.go:130] >       ],
	I0719 15:15:02.952879   40893 command_runner.go:130] >       "size": "750414",
	I0719 15:15:02.952885   40893 command_runner.go:130] >       "uid": {
	I0719 15:15:02.952889   40893 command_runner.go:130] >         "value": "65535"
	I0719 15:15:02.952892   40893 command_runner.go:130] >       },
	I0719 15:15:02.952896   40893 command_runner.go:130] >       "username": "",
	I0719 15:15:02.952901   40893 command_runner.go:130] >       "spec": null,
	I0719 15:15:02.952905   40893 command_runner.go:130] >       "pinned": true
	I0719 15:15:02.952910   40893 command_runner.go:130] >     }
	I0719 15:15:02.952914   40893 command_runner.go:130] >   ]
	I0719 15:15:02.952923   40893 command_runner.go:130] > }
	I0719 15:15:02.955123   40893 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:15:02.955141   40893 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:15:02.955150   40893 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.30.3 crio true true} ...
	I0719 15:15:02.955268   40893 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-121443 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:15:02.955356   40893 ssh_runner.go:195] Run: crio config
	I0719 15:15:02.988947   40893 command_runner.go:130] ! time="2024-07-19 15:15:02.967781103Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0719 15:15:02.995547   40893 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0719 15:15:03.001800   40893 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0719 15:15:03.001829   40893 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0719 15:15:03.001835   40893 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0719 15:15:03.001839   40893 command_runner.go:130] > #
	I0719 15:15:03.001845   40893 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0719 15:15:03.001852   40893 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0719 15:15:03.001857   40893 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0719 15:15:03.001866   40893 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0719 15:15:03.001870   40893 command_runner.go:130] > # reload'.
	I0719 15:15:03.001876   40893 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0719 15:15:03.001881   40893 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0719 15:15:03.001890   40893 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0719 15:15:03.001897   40893 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0719 15:15:03.001911   40893 command_runner.go:130] > [crio]
	I0719 15:15:03.001921   40893 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0719 15:15:03.001925   40893 command_runner.go:130] > # containers images, in this directory.
	I0719 15:15:03.001930   40893 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0719 15:15:03.001942   40893 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0719 15:15:03.001949   40893 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0719 15:15:03.001962   40893 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0719 15:15:03.001968   40893 command_runner.go:130] > # imagestore = ""
	I0719 15:15:03.001974   40893 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0719 15:15:03.001982   40893 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0719 15:15:03.001986   40893 command_runner.go:130] > storage_driver = "overlay"
	I0719 15:15:03.001993   40893 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0719 15:15:03.001998   40893 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0719 15:15:03.002004   40893 command_runner.go:130] > storage_option = [
	I0719 15:15:03.002008   40893 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0719 15:15:03.002012   40893 command_runner.go:130] > ]
	I0719 15:15:03.002021   40893 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0719 15:15:03.002034   40893 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0719 15:15:03.002040   40893 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0719 15:15:03.002046   40893 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0719 15:15:03.002054   40893 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0719 15:15:03.002064   40893 command_runner.go:130] > # always happen on a node reboot
	I0719 15:15:03.002071   40893 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0719 15:15:03.002088   40893 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0719 15:15:03.002096   40893 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0719 15:15:03.002101   40893 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0719 15:15:03.002105   40893 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0719 15:15:03.002112   40893 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0719 15:15:03.002121   40893 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0719 15:15:03.002127   40893 command_runner.go:130] > # internal_wipe = true
	I0719 15:15:03.002134   40893 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0719 15:15:03.002142   40893 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0719 15:15:03.002146   40893 command_runner.go:130] > # internal_repair = false
	I0719 15:15:03.002153   40893 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0719 15:15:03.002158   40893 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0719 15:15:03.002165   40893 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0719 15:15:03.002171   40893 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0719 15:15:03.002180   40893 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0719 15:15:03.002186   40893 command_runner.go:130] > [crio.api]
	I0719 15:15:03.002191   40893 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0719 15:15:03.002198   40893 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0719 15:15:03.002202   40893 command_runner.go:130] > # IP address on which the stream server will listen.
	I0719 15:15:03.002207   40893 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0719 15:15:03.002213   40893 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0719 15:15:03.002219   40893 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0719 15:15:03.002223   40893 command_runner.go:130] > # stream_port = "0"
	I0719 15:15:03.002228   40893 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0719 15:15:03.002247   40893 command_runner.go:130] > # stream_enable_tls = false
	I0719 15:15:03.002257   40893 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0719 15:15:03.002265   40893 command_runner.go:130] > # stream_idle_timeout = ""
	I0719 15:15:03.002271   40893 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0719 15:15:03.002279   40893 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0719 15:15:03.002282   40893 command_runner.go:130] > # minutes.
	I0719 15:15:03.002291   40893 command_runner.go:130] > # stream_tls_cert = ""
	I0719 15:15:03.002299   40893 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0719 15:15:03.002305   40893 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0719 15:15:03.002311   40893 command_runner.go:130] > # stream_tls_key = ""
	I0719 15:15:03.002317   40893 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0719 15:15:03.002325   40893 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0719 15:15:03.002343   40893 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0719 15:15:03.002349   40893 command_runner.go:130] > # stream_tls_ca = ""
	I0719 15:15:03.002356   40893 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 15:15:03.002363   40893 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0719 15:15:03.002370   40893 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 15:15:03.002376   40893 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0719 15:15:03.002382   40893 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0719 15:15:03.002390   40893 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0719 15:15:03.002394   40893 command_runner.go:130] > [crio.runtime]
	I0719 15:15:03.002401   40893 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0719 15:15:03.002408   40893 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0719 15:15:03.002412   40893 command_runner.go:130] > # "nofile=1024:2048"
	I0719 15:15:03.002420   40893 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0719 15:15:03.002424   40893 command_runner.go:130] > # default_ulimits = [
	I0719 15:15:03.002427   40893 command_runner.go:130] > # ]
	I0719 15:15:03.002435   40893 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0719 15:15:03.002439   40893 command_runner.go:130] > # no_pivot = false
	I0719 15:15:03.002448   40893 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0719 15:15:03.002456   40893 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0719 15:15:03.002460   40893 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0719 15:15:03.002467   40893 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0719 15:15:03.002472   40893 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0719 15:15:03.002481   40893 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 15:15:03.002487   40893 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0719 15:15:03.002491   40893 command_runner.go:130] > # Cgroup setting for conmon
	I0719 15:15:03.002498   40893 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0719 15:15:03.002504   40893 command_runner.go:130] > conmon_cgroup = "pod"
	I0719 15:15:03.002509   40893 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0719 15:15:03.002516   40893 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0719 15:15:03.002522   40893 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 15:15:03.002534   40893 command_runner.go:130] > conmon_env = [
	I0719 15:15:03.002542   40893 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 15:15:03.002546   40893 command_runner.go:130] > ]
	I0719 15:15:03.002552   40893 command_runner.go:130] > # Additional environment variables to set for all the
	I0719 15:15:03.002558   40893 command_runner.go:130] > # containers. These are overridden if set in the
	I0719 15:15:03.002564   40893 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0719 15:15:03.002569   40893 command_runner.go:130] > # default_env = [
	I0719 15:15:03.002572   40893 command_runner.go:130] > # ]
	I0719 15:15:03.002577   40893 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0719 15:15:03.002586   40893 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0719 15:15:03.002590   40893 command_runner.go:130] > # selinux = false
	I0719 15:15:03.002595   40893 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0719 15:15:03.002603   40893 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0719 15:15:03.002609   40893 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0719 15:15:03.002615   40893 command_runner.go:130] > # seccomp_profile = ""
	I0719 15:15:03.002620   40893 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0719 15:15:03.002628   40893 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0719 15:15:03.002633   40893 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0719 15:15:03.002640   40893 command_runner.go:130] > # which might increase security.
	I0719 15:15:03.002644   40893 command_runner.go:130] > # This option is currently deprecated,
	I0719 15:15:03.002653   40893 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0719 15:15:03.002661   40893 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0719 15:15:03.002667   40893 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0719 15:15:03.002675   40893 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0719 15:15:03.002683   40893 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0719 15:15:03.002691   40893 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0719 15:15:03.002696   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.002702   40893 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0719 15:15:03.002707   40893 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0719 15:15:03.002714   40893 command_runner.go:130] > # the cgroup blockio controller.
	I0719 15:15:03.002718   40893 command_runner.go:130] > # blockio_config_file = ""
	I0719 15:15:03.002726   40893 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0719 15:15:03.002730   40893 command_runner.go:130] > # blockio parameters.
	I0719 15:15:03.002736   40893 command_runner.go:130] > # blockio_reload = false
	I0719 15:15:03.002742   40893 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0719 15:15:03.002748   40893 command_runner.go:130] > # irqbalance daemon.
	I0719 15:15:03.002757   40893 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0719 15:15:03.002765   40893 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0719 15:15:03.002772   40893 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0719 15:15:03.002780   40893 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0719 15:15:03.002788   40893 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0719 15:15:03.002795   40893 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0719 15:15:03.002801   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.002806   40893 command_runner.go:130] > # rdt_config_file = ""
	I0719 15:15:03.002813   40893 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0719 15:15:03.002816   40893 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0719 15:15:03.002844   40893 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0719 15:15:03.002851   40893 command_runner.go:130] > # separate_pull_cgroup = ""
	I0719 15:15:03.002857   40893 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0719 15:15:03.002865   40893 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0719 15:15:03.002868   40893 command_runner.go:130] > # will be added.
	I0719 15:15:03.002874   40893 command_runner.go:130] > # default_capabilities = [
	I0719 15:15:03.002878   40893 command_runner.go:130] > # 	"CHOWN",
	I0719 15:15:03.002884   40893 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0719 15:15:03.002887   40893 command_runner.go:130] > # 	"FSETID",
	I0719 15:15:03.002892   40893 command_runner.go:130] > # 	"FOWNER",
	I0719 15:15:03.002895   40893 command_runner.go:130] > # 	"SETGID",
	I0719 15:15:03.002906   40893 command_runner.go:130] > # 	"SETUID",
	I0719 15:15:03.002911   40893 command_runner.go:130] > # 	"SETPCAP",
	I0719 15:15:03.002915   40893 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0719 15:15:03.002921   40893 command_runner.go:130] > # 	"KILL",
	I0719 15:15:03.002924   40893 command_runner.go:130] > # ]
	I0719 15:15:03.002935   40893 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0719 15:15:03.002942   40893 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0719 15:15:03.002950   40893 command_runner.go:130] > # add_inheritable_capabilities = false
	I0719 15:15:03.002957   40893 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0719 15:15:03.002965   40893 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 15:15:03.002969   40893 command_runner.go:130] > default_sysctls = [
	I0719 15:15:03.002975   40893 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0719 15:15:03.002978   40893 command_runner.go:130] > ]
	I0719 15:15:03.002982   40893 command_runner.go:130] > # List of devices on the host that a
	I0719 15:15:03.002990   40893 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0719 15:15:03.002998   40893 command_runner.go:130] > # allowed_devices = [
	I0719 15:15:03.003004   40893 command_runner.go:130] > # 	"/dev/fuse",
	I0719 15:15:03.003007   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003014   40893 command_runner.go:130] > # List of additional devices. specified as
	I0719 15:15:03.003021   40893 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0719 15:15:03.003028   40893 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0719 15:15:03.003034   40893 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 15:15:03.003040   40893 command_runner.go:130] > # additional_devices = [
	I0719 15:15:03.003043   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003050   40893 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0719 15:15:03.003054   40893 command_runner.go:130] > # cdi_spec_dirs = [
	I0719 15:15:03.003059   40893 command_runner.go:130] > # 	"/etc/cdi",
	I0719 15:15:03.003063   40893 command_runner.go:130] > # 	"/var/run/cdi",
	I0719 15:15:03.003066   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003071   40893 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0719 15:15:03.003079   40893 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0719 15:15:03.003083   40893 command_runner.go:130] > # Defaults to false.
	I0719 15:15:03.003089   40893 command_runner.go:130] > # device_ownership_from_security_context = false
	I0719 15:15:03.003095   40893 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0719 15:15:03.003102   40893 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0719 15:15:03.003106   40893 command_runner.go:130] > # hooks_dir = [
	I0719 15:15:03.003112   40893 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0719 15:15:03.003115   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003120   40893 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0719 15:15:03.003128   40893 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0719 15:15:03.003133   40893 command_runner.go:130] > # its default mounts from the following two files:
	I0719 15:15:03.003138   40893 command_runner.go:130] > #
	I0719 15:15:03.003144   40893 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0719 15:15:03.003151   40893 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0719 15:15:03.003157   40893 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0719 15:15:03.003161   40893 command_runner.go:130] > #
	I0719 15:15:03.003166   40893 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0719 15:15:03.003174   40893 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0719 15:15:03.003182   40893 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0719 15:15:03.003189   40893 command_runner.go:130] > #      only add mounts it finds in this file.
	I0719 15:15:03.003193   40893 command_runner.go:130] > #
	I0719 15:15:03.003201   40893 command_runner.go:130] > # default_mounts_file = ""
	I0719 15:15:03.003208   40893 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0719 15:15:03.003214   40893 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0719 15:15:03.003220   40893 command_runner.go:130] > pids_limit = 1024
	I0719 15:15:03.003229   40893 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0719 15:15:03.003237   40893 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0719 15:15:03.003246   40893 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0719 15:15:03.003253   40893 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0719 15:15:03.003259   40893 command_runner.go:130] > # log_size_max = -1
	I0719 15:15:03.003265   40893 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0719 15:15:03.003269   40893 command_runner.go:130] > # log_to_journald = false
	I0719 15:15:03.003277   40893 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0719 15:15:03.003281   40893 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0719 15:15:03.003287   40893 command_runner.go:130] > # Path to directory for container attach sockets.
	I0719 15:15:03.003292   40893 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0719 15:15:03.003299   40893 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0719 15:15:03.003303   40893 command_runner.go:130] > # bind_mount_prefix = ""
	I0719 15:15:03.003310   40893 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0719 15:15:03.003314   40893 command_runner.go:130] > # read_only = false
	I0719 15:15:03.003322   40893 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0719 15:15:03.003328   40893 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0719 15:15:03.003333   40893 command_runner.go:130] > # live configuration reload.
	I0719 15:15:03.003337   40893 command_runner.go:130] > # log_level = "info"
	I0719 15:15:03.003342   40893 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0719 15:15:03.003349   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.003353   40893 command_runner.go:130] > # log_filter = ""
	I0719 15:15:03.003361   40893 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0719 15:15:03.003370   40893 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0719 15:15:03.003374   40893 command_runner.go:130] > # separated by comma.
	I0719 15:15:03.003381   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003387   40893 command_runner.go:130] > # uid_mappings = ""
	I0719 15:15:03.003393   40893 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0719 15:15:03.003400   40893 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0719 15:15:03.003404   40893 command_runner.go:130] > # separated by comma.
	I0719 15:15:03.003413   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003421   40893 command_runner.go:130] > # gid_mappings = ""
	I0719 15:15:03.003431   40893 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0719 15:15:03.003439   40893 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 15:15:03.003444   40893 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 15:15:03.003453   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003458   40893 command_runner.go:130] > # minimum_mappable_uid = -1
	I0719 15:15:03.003463   40893 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0719 15:15:03.003472   40893 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 15:15:03.003478   40893 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 15:15:03.003487   40893 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 15:15:03.003491   40893 command_runner.go:130] > # minimum_mappable_gid = -1
	I0719 15:15:03.003497   40893 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0719 15:15:03.003505   40893 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0719 15:15:03.003510   40893 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0719 15:15:03.003515   40893 command_runner.go:130] > # ctr_stop_timeout = 30
	I0719 15:15:03.003520   40893 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0719 15:15:03.003526   40893 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0719 15:15:03.003530   40893 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0719 15:15:03.003537   40893 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0719 15:15:03.003540   40893 command_runner.go:130] > drop_infra_ctr = false
	I0719 15:15:03.003547   40893 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0719 15:15:03.003558   40893 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0719 15:15:03.003567   40893 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0719 15:15:03.003571   40893 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0719 15:15:03.003580   40893 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0719 15:15:03.003585   40893 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0719 15:15:03.003596   40893 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0719 15:15:03.003601   40893 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0719 15:15:03.003608   40893 command_runner.go:130] > # shared_cpuset = ""
	I0719 15:15:03.003613   40893 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0719 15:15:03.003620   40893 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0719 15:15:03.003625   40893 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0719 15:15:03.003633   40893 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0719 15:15:03.003639   40893 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0719 15:15:03.003644   40893 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0719 15:15:03.003659   40893 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0719 15:15:03.003666   40893 command_runner.go:130] > # enable_criu_support = false
	I0719 15:15:03.003675   40893 command_runner.go:130] > # Enable/disable the generation of the container,
	I0719 15:15:03.003683   40893 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0719 15:15:03.003687   40893 command_runner.go:130] > # enable_pod_events = false
	I0719 15:15:03.003693   40893 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0719 15:15:03.003712   40893 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0719 15:15:03.003716   40893 command_runner.go:130] > # default_runtime = "runc"
	I0719 15:15:03.003726   40893 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0719 15:15:03.003734   40893 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0719 15:15:03.003744   40893 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0719 15:15:03.003752   40893 command_runner.go:130] > # creation as a file is not desired either.
	I0719 15:15:03.003760   40893 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0719 15:15:03.003766   40893 command_runner.go:130] > # the hostname is being managed dynamically.
	I0719 15:15:03.003770   40893 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0719 15:15:03.003773   40893 command_runner.go:130] > # ]
	I0719 15:15:03.003779   40893 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0719 15:15:03.003787   40893 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0719 15:15:03.003792   40893 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0719 15:15:03.003799   40893 command_runner.go:130] > # Each entry in the table should follow the format:
	I0719 15:15:03.003802   40893 command_runner.go:130] > #
	I0719 15:15:03.003807   40893 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0719 15:15:03.003812   40893 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0719 15:15:03.003855   40893 command_runner.go:130] > # runtime_type = "oci"
	I0719 15:15:03.003862   40893 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0719 15:15:03.003866   40893 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0719 15:15:03.003870   40893 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0719 15:15:03.003874   40893 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0719 15:15:03.003880   40893 command_runner.go:130] > # monitor_env = []
	I0719 15:15:03.003884   40893 command_runner.go:130] > # privileged_without_host_devices = false
	I0719 15:15:03.003892   40893 command_runner.go:130] > # allowed_annotations = []
	I0719 15:15:03.003899   40893 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0719 15:15:03.003908   40893 command_runner.go:130] > # Where:
	I0719 15:15:03.003913   40893 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0719 15:15:03.003921   40893 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0719 15:15:03.003928   40893 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0719 15:15:03.003936   40893 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0719 15:15:03.003946   40893 command_runner.go:130] > #   in $PATH.
	I0719 15:15:03.003955   40893 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0719 15:15:03.003959   40893 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0719 15:15:03.003966   40893 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0719 15:15:03.003972   40893 command_runner.go:130] > #   state.
	I0719 15:15:03.003978   40893 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0719 15:15:03.003985   40893 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0719 15:15:03.003991   40893 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0719 15:15:03.003998   40893 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0719 15:15:03.004003   40893 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0719 15:15:03.004011   40893 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0719 15:15:03.004016   40893 command_runner.go:130] > #   The currently recognized values are:
	I0719 15:15:03.004024   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0719 15:15:03.004033   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0719 15:15:03.004038   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0719 15:15:03.004044   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0719 15:15:03.004052   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0719 15:15:03.004060   40893 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0719 15:15:03.004066   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0719 15:15:03.004074   40893 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0719 15:15:03.004079   40893 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0719 15:15:03.004087   40893 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0719 15:15:03.004091   40893 command_runner.go:130] > #   deprecated option "conmon".
	I0719 15:15:03.004100   40893 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0719 15:15:03.004104   40893 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0719 15:15:03.004113   40893 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0719 15:15:03.004119   40893 command_runner.go:130] > #   should be moved to the container's cgroup
	I0719 15:15:03.004125   40893 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0719 15:15:03.004132   40893 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0719 15:15:03.004139   40893 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0719 15:15:03.004145   40893 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0719 15:15:03.004149   40893 command_runner.go:130] > #
	I0719 15:15:03.004156   40893 command_runner.go:130] > # Using the seccomp notifier feature:
	I0719 15:15:03.004161   40893 command_runner.go:130] > #
	I0719 15:15:03.004169   40893 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0719 15:15:03.004175   40893 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0719 15:15:03.004185   40893 command_runner.go:130] > #
	I0719 15:15:03.004193   40893 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0719 15:15:03.004199   40893 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0719 15:15:03.004203   40893 command_runner.go:130] > #
	I0719 15:15:03.004208   40893 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0719 15:15:03.004211   40893 command_runner.go:130] > # feature.
	I0719 15:15:03.004214   40893 command_runner.go:130] > #
	I0719 15:15:03.004220   40893 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0719 15:15:03.004228   40893 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0719 15:15:03.004234   40893 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0719 15:15:03.004241   40893 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0719 15:15:03.004249   40893 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0719 15:15:03.004254   40893 command_runner.go:130] > #
	I0719 15:15:03.004259   40893 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0719 15:15:03.004267   40893 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0719 15:15:03.004270   40893 command_runner.go:130] > #
	I0719 15:15:03.004276   40893 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0719 15:15:03.004283   40893 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0719 15:15:03.004286   40893 command_runner.go:130] > #
	I0719 15:15:03.004292   40893 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0719 15:15:03.004300   40893 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0719 15:15:03.004304   40893 command_runner.go:130] > # limitation.
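	The seccomp notifier notes above only take effect for a runtime handler that explicitly allows the annotation. A minimal sketch of such a handler, assuming CRI-O's standard drop-in directory /etc/crio/crio.conf.d/ and an illustrative handler name "runc-debug" (neither is configured in this run):

	    # Illustrative drop-in; requires runc >= 1.1.0 or crun >= 0.19 per the notes above.
	    sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<'EOF'
	    [crio.runtime.runtimes.runc-debug]
	    runtime_path = "/usr/bin/runc"
	    runtime_type = "oci"
	    allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	    EOF
	    sudo systemctl restart crio
	    # A pod would then select this handler via a RuntimeClass, set restartPolicy: Never,
	    # and add the annotation io.kubernetes.cri-o.seccompNotifierAction=stop.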
	I0719 15:15:03.004308   40893 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0719 15:15:03.004314   40893 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0719 15:15:03.004319   40893 command_runner.go:130] > runtime_type = "oci"
	I0719 15:15:03.004325   40893 command_runner.go:130] > runtime_root = "/run/runc"
	I0719 15:15:03.004329   40893 command_runner.go:130] > runtime_config_path = ""
	I0719 15:15:03.004335   40893 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0719 15:15:03.004339   40893 command_runner.go:130] > monitor_cgroup = "pod"
	I0719 15:15:03.004345   40893 command_runner.go:130] > monitor_exec_cgroup = ""
	I0719 15:15:03.004349   40893 command_runner.go:130] > monitor_env = [
	I0719 15:15:03.004355   40893 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 15:15:03.004359   40893 command_runner.go:130] > ]
	I0719 15:15:03.004364   40893 command_runner.go:130] > privileged_without_host_devices = false
	I0719 15:15:03.004372   40893 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0719 15:15:03.004377   40893 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0719 15:15:03.004389   40893 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0719 15:15:03.004402   40893 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0719 15:15:03.004413   40893 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0719 15:15:03.004420   40893 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0719 15:15:03.004429   40893 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0719 15:15:03.004438   40893 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0719 15:15:03.004445   40893 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0719 15:15:03.004452   40893 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0719 15:15:03.004457   40893 command_runner.go:130] > # Example:
	I0719 15:15:03.004462   40893 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0719 15:15:03.004468   40893 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0719 15:15:03.004472   40893 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0719 15:15:03.004477   40893 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0719 15:15:03.004483   40893 command_runner.go:130] > # cpuset = 0
	I0719 15:15:03.004487   40893 command_runner.go:130] > # cpushares = "0-1"
	I0719 15:15:03.004491   40893 command_runner.go:130] > # Where:
	I0719 15:15:03.004495   40893 command_runner.go:130] > # The workload name is workload-type.
	I0719 15:15:03.004503   40893 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0719 15:15:03.004510   40893 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0719 15:15:03.004517   40893 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0719 15:15:03.004525   40893 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0719 15:15:03.004532   40893 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
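	Tying the workload example above together, the opt-in is purely annotation-driven and must be present when the pod is created. A hedged sketch using the exact keys from the example (the pod name, container name, and cpushares value are made up):

	    # Hypothetical pod opting into the "workload-type" workload described above.
	    cat <<'EOF' | kubectl apply -f -
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: workload-demo
	      annotations:
	        io.crio/workload: ""   # activation annotation; the value is ignored
	        io.crio.workload-type/workload-demo: '{"cpushares": "512"}'   # per-container override
	    spec:
	      containers:
	      - name: workload-demo
	        image: registry.k8s.io/pause:3.9
	    EOF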
	I0719 15:15:03.004537   40893 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0719 15:15:03.004543   40893 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0719 15:15:03.004549   40893 command_runner.go:130] > # Default value is set to true
	I0719 15:15:03.004554   40893 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0719 15:15:03.004561   40893 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0719 15:15:03.004567   40893 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0719 15:15:03.004574   40893 command_runner.go:130] > # Default value is set to 'false'
	I0719 15:15:03.004579   40893 command_runner.go:130] > # disable_hostport_mapping = false
	I0719 15:15:03.004587   40893 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0719 15:15:03.004590   40893 command_runner.go:130] > #
	I0719 15:15:03.004595   40893 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0719 15:15:03.004600   40893 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0719 15:15:03.004606   40893 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0719 15:15:03.004611   40893 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0719 15:15:03.004622   40893 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0719 15:15:03.004626   40893 command_runner.go:130] > [crio.image]
	I0719 15:15:03.004631   40893 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0719 15:15:03.004635   40893 command_runner.go:130] > # default_transport = "docker://"
	I0719 15:15:03.004640   40893 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0719 15:15:03.004645   40893 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0719 15:15:03.004649   40893 command_runner.go:130] > # global_auth_file = ""
	I0719 15:15:03.004655   40893 command_runner.go:130] > # The image used to instantiate infra containers.
	I0719 15:15:03.004660   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.004664   40893 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0719 15:15:03.004669   40893 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0719 15:15:03.004674   40893 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0719 15:15:03.004679   40893 command_runner.go:130] > # This option supports live configuration reload.
	I0719 15:15:03.004683   40893 command_runner.go:130] > # pause_image_auth_file = ""
	I0719 15:15:03.004687   40893 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0719 15:15:03.004693   40893 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0719 15:15:03.004698   40893 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0719 15:15:03.004703   40893 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0719 15:15:03.004707   40893 command_runner.go:130] > # pause_command = "/pause"
	I0719 15:15:03.004712   40893 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0719 15:15:03.004717   40893 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0719 15:15:03.004722   40893 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0719 15:15:03.004728   40893 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0719 15:15:03.004733   40893 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0719 15:15:03.004739   40893 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0719 15:15:03.004742   40893 command_runner.go:130] > # pinned_images = [
	I0719 15:15:03.004745   40893 command_runner.go:130] > # ]
	I0719 15:15:03.004750   40893 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0719 15:15:03.004756   40893 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0719 15:15:03.004761   40893 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0719 15:15:03.004766   40893 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0719 15:15:03.004773   40893 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0719 15:15:03.004776   40893 command_runner.go:130] > # signature_policy = ""
	I0719 15:15:03.004781   40893 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0719 15:15:03.004787   40893 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0719 15:15:03.004793   40893 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0719 15:15:03.004809   40893 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0719 15:15:03.004820   40893 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0719 15:15:03.004829   40893 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0719 15:15:03.004837   40893 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0719 15:15:03.004847   40893 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0719 15:15:03.004851   40893 command_runner.go:130] > # changing them here.
	I0719 15:15:03.004859   40893 command_runner.go:130] > # insecure_registries = [
	I0719 15:15:03.004863   40893 command_runner.go:130] > # ]
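	All of the [crio.image] settings above are commented defaults. When one of them does need to change on a node, a small drop-in is enough; this sketch assumes the standard drop-in directory and uses a purely illustrative internal registry name:

	    sudo tee /etc/crio/crio.conf.d/10-image.conf <<'EOF'
	    [crio.image]
	    # Keep the pause image out of kubelet image garbage collection.
	    pause_image = "registry.k8s.io/pause:3.9"
	    pinned_images = ["registry.k8s.io/pause:3.9"]
	    # Illustrative registry without TLS; registries.conf is the preferred place for this.
	    insecure_registries = ["registry.internal.example:5000"]
	    EOF
	    sudo systemctl restart crio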
	I0719 15:15:03.004871   40893 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0719 15:15:03.004880   40893 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0719 15:15:03.004889   40893 command_runner.go:130] > # image_volumes = "mkdir"
	I0719 15:15:03.004904   40893 command_runner.go:130] > # Temporary directory to use for storing big files
	I0719 15:15:03.004911   40893 command_runner.go:130] > # big_files_temporary_dir = ""
	I0719 15:15:03.004917   40893 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0719 15:15:03.004923   40893 command_runner.go:130] > # CNI plugins.
	I0719 15:15:03.004927   40893 command_runner.go:130] > [crio.network]
	I0719 15:15:03.004935   40893 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0719 15:15:03.004940   40893 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0719 15:15:03.004947   40893 command_runner.go:130] > # cni_default_network = ""
	I0719 15:15:03.004952   40893 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0719 15:15:03.004960   40893 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0719 15:15:03.004965   40893 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0719 15:15:03.004971   40893 command_runner.go:130] > # plugin_dirs = [
	I0719 15:15:03.004974   40893 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0719 15:15:03.004980   40893 command_runner.go:130] > # ]
	I0719 15:15:03.004985   40893 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0719 15:15:03.004991   40893 command_runner.go:130] > [crio.metrics]
	I0719 15:15:03.004995   40893 command_runner.go:130] > # Globally enable or disable metrics support.
	I0719 15:15:03.005001   40893 command_runner.go:130] > enable_metrics = true
	I0719 15:15:03.005005   40893 command_runner.go:130] > # Specify enabled metrics collectors.
	I0719 15:15:03.005012   40893 command_runner.go:130] > # Per default all metrics are enabled.
	I0719 15:15:03.005018   40893 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0719 15:15:03.005026   40893 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0719 15:15:03.005031   40893 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0719 15:15:03.005039   40893 command_runner.go:130] > # metrics_collectors = [
	I0719 15:15:03.005044   40893 command_runner.go:130] > # 	"operations",
	I0719 15:15:03.005054   40893 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0719 15:15:03.005061   40893 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0719 15:15:03.005065   40893 command_runner.go:130] > # 	"operations_errors",
	I0719 15:15:03.005071   40893 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0719 15:15:03.005075   40893 command_runner.go:130] > # 	"image_pulls_by_name",
	I0719 15:15:03.005082   40893 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0719 15:15:03.005088   40893 command_runner.go:130] > # 	"image_pulls_failures",
	I0719 15:15:03.005093   40893 command_runner.go:130] > # 	"image_pulls_successes",
	I0719 15:15:03.005097   40893 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0719 15:15:03.005103   40893 command_runner.go:130] > # 	"image_layer_reuse",
	I0719 15:15:03.005107   40893 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0719 15:15:03.005113   40893 command_runner.go:130] > # 	"containers_oom_total",
	I0719 15:15:03.005117   40893 command_runner.go:130] > # 	"containers_oom",
	I0719 15:15:03.005120   40893 command_runner.go:130] > # 	"processes_defunct",
	I0719 15:15:03.005126   40893 command_runner.go:130] > # 	"operations_total",
	I0719 15:15:03.005130   40893 command_runner.go:130] > # 	"operations_latency_seconds",
	I0719 15:15:03.005135   40893 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0719 15:15:03.005139   40893 command_runner.go:130] > # 	"operations_errors_total",
	I0719 15:15:03.005145   40893 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0719 15:15:03.005149   40893 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0719 15:15:03.005155   40893 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0719 15:15:03.005160   40893 command_runner.go:130] > # 	"image_pulls_success_total",
	I0719 15:15:03.005166   40893 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0719 15:15:03.005170   40893 command_runner.go:130] > # 	"containers_oom_count_total",
	I0719 15:15:03.005176   40893 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0719 15:15:03.005185   40893 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0719 15:15:03.005190   40893 command_runner.go:130] > # ]
	I0719 15:15:03.005195   40893 command_runner.go:130] > # The port on which the metrics server will listen.
	I0719 15:15:03.005200   40893 command_runner.go:130] > # metrics_port = 9090
	I0719 15:15:03.005205   40893 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0719 15:15:03.005208   40893 command_runner.go:130] > # metrics_socket = ""
	I0719 15:15:03.005215   40893 command_runner.go:130] > # The certificate for the secure metrics server.
	I0719 15:15:03.005221   40893 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0719 15:15:03.005229   40893 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0719 15:15:03.005233   40893 command_runner.go:130] > # certificate on any modification event.
	I0719 15:15:03.005237   40893 command_runner.go:130] > # metrics_cert = ""
	I0719 15:15:03.005328   40893 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0719 15:15:03.005461   40893 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0719 15:15:03.005474   40893 command_runner.go:130] > # metrics_key = ""
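	This cluster runs with enable_metrics = true but leaves the collectors, port, and TLS settings at their defaults. Assuming the default metrics_port of 9090 from the comments above, the endpoint can be spot-checked on the node:

	    # CRI-O serves Prometheus text metrics; 9090 is the commented default port above.
	    curl -s http://127.0.0.1:9090/metrics | head -n 20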
	I0719 15:15:03.005497   40893 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0719 15:15:03.005504   40893 command_runner.go:130] > [crio.tracing]
	I0719 15:15:03.005519   40893 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0719 15:15:03.005531   40893 command_runner.go:130] > # enable_tracing = false
	I0719 15:15:03.005578   40893 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0719 15:15:03.005629   40893 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0719 15:15:03.005649   40893 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0719 15:15:03.005662   40893 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0719 15:15:03.005679   40893 command_runner.go:130] > # CRI-O NRI configuration.
	I0719 15:15:03.005684   40893 command_runner.go:130] > [crio.nri]
	I0719 15:15:03.005691   40893 command_runner.go:130] > # Globally enable or disable NRI.
	I0719 15:15:03.005700   40893 command_runner.go:130] > # enable_nri = false
	I0719 15:15:03.005709   40893 command_runner.go:130] > # NRI socket to listen on.
	I0719 15:15:03.005721   40893 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0719 15:15:03.005728   40893 command_runner.go:130] > # NRI plugin directory to use.
	I0719 15:15:03.005735   40893 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0719 15:15:03.005743   40893 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0719 15:15:03.005755   40893 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0719 15:15:03.005763   40893 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0719 15:15:03.005769   40893 command_runner.go:130] > # nri_disable_connections = false
	I0719 15:15:03.005777   40893 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0719 15:15:03.005789   40893 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0719 15:15:03.005796   40893 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0719 15:15:03.005803   40893 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0719 15:15:03.005821   40893 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0719 15:15:03.005827   40893 command_runner.go:130] > [crio.stats]
	I0719 15:15:03.005836   40893 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0719 15:15:03.005844   40893 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0719 15:15:03.005856   40893 command_runner.go:130] > # stats_collection_period = 0
	I0719 15:15:03.006082   40893 cni.go:84] Creating CNI manager for ""
	I0719 15:15:03.006095   40893 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 15:15:03.006111   40893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:15:03.006147   40893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-121443 NodeName:multinode-121443 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:15:03.006415   40893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-121443"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
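	The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As an aside (not something this run does), a config of this shape can be sanity-checked without mutating the node, since kubeadm supports a dry-run mode:

	    # Illustrative validation of the generated config; only prints what would be done.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run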
	
	I0719 15:15:03.006496   40893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:15:03.018800   40893 command_runner.go:130] > kubeadm
	I0719 15:15:03.018826   40893 command_runner.go:130] > kubectl
	I0719 15:15:03.018833   40893 command_runner.go:130] > kubelet
	I0719 15:15:03.018854   40893 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:15:03.018956   40893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:15:03.029627   40893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0719 15:15:03.046862   40893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:15:03.064137   40893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 15:15:03.080619   40893 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0719 15:15:03.084525   40893 command_runner.go:130] > 192.168.39.32	control-plane.minikube.internal
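	The grep above confirms that control-plane.minikube.internal already resolves via /etc/hosts, so nothing needs to be added. The same check-then-append pattern, written out as a sketch (not minikube's exact code) with the IP from this run:

	    grep -q 'control-plane.minikube.internal' /etc/hosts || \
	      echo '192.168.39.32 control-plane.minikube.internal' | sudo tee -a /etc/hosts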
	I0719 15:15:03.084595   40893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:15:03.220569   40893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:15:03.236315   40893 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443 for IP: 192.168.39.32
	I0719 15:15:03.236340   40893 certs.go:194] generating shared ca certs ...
	I0719 15:15:03.236370   40893 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:15:03.236513   40893 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:15:03.236550   40893 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:15:03.236559   40893 certs.go:256] generating profile certs ...
	I0719 15:15:03.236654   40893 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/client.key
	I0719 15:15:03.236708   40893 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.key.e7ca767b
	I0719 15:15:03.236745   40893 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.key
	I0719 15:15:03.236755   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 15:15:03.236770   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 15:15:03.236782   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 15:15:03.236795   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 15:15:03.236804   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 15:15:03.236818   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 15:15:03.236831   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 15:15:03.236842   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 15:15:03.236888   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:15:03.236916   40893 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:15:03.236926   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:15:03.236946   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:15:03.236967   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:15:03.236987   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:15:03.237022   40893 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:15:03.237047   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.237059   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem -> /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.237072   40893 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.237733   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:15:03.262858   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:15:03.287541   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:15:03.314697   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:15:03.339389   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:15:03.362633   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:15:03.386069   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:15:03.409114   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/multinode-121443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:15:03.432252   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:15:03.456053   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:15:03.481819   40893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:15:03.505986   40893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:15:03.523239   40893 ssh_runner.go:195] Run: openssl version
	I0719 15:15:03.529542   40893 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 15:15:03.529602   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:15:03.541105   40893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.545516   40893 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.545540   40893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.545576   40893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:15:03.551087   40893 command_runner.go:130] > b5213941
	I0719 15:15:03.551226   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:15:03.561292   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:15:03.572703   40893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.577322   40893 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.577356   40893 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.577394   40893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:15:03.583490   40893 command_runner.go:130] > 51391683
	I0719 15:15:03.583554   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:15:03.594272   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:15:03.606354   40893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.610887   40893 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.610916   40893 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.610956   40893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:15:03.616617   40893 command_runner.go:130] > 3ec20f2e
	I0719 15:15:03.616700   40893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
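	The three blocks above repeat one pattern per CA bundle: place the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, then add a second symlink named after the OpenSSL subject hash so TLS libraries can locate it. Condensed into a sketch for an arbitrary certificate (the file name is illustrative):

	    cert=/usr/share/ca-certificates/mycert.pem
	    sudo ln -fs "$cert" /etc/ssl/certs/mycert.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")
	    sudo ln -fs /etc/ssl/certs/mycert.pem "/etc/ssl/certs/${hash}.0"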
	I0719 15:15:03.627166   40893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:15:03.632433   40893 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:15:03.632463   40893 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 15:15:03.632471   40893 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0719 15:15:03.632480   40893 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 15:15:03.632491   40893 command_runner.go:130] > Access: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632502   40893 command_runner.go:130] > Modify: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632510   40893 command_runner.go:130] > Change: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632517   40893 command_runner.go:130] >  Birth: 2024-07-19 15:08:08.603207565 +0000
	I0719 15:15:03.632564   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:15:03.638428   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.638495   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:15:03.643992   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.644162   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:15:03.649767   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.649943   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:15:03.655723   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.655876   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:15:03.661552   40893 command_runner.go:130] > Certificate will not expire
	I0719 15:15:03.661769   40893 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:15:03.667625   40893 command_runner.go:130] > Certificate will not expire
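	Each certificate above passes openssl's -checkend 86400 test, i.e. it will still be valid 24 hours from now. The same check generalizes to a quick loop over the whole certs tree:

	    # The exit status of -checkend is non-zero when the cert expires within the window.
	    for c in $(sudo find /var/lib/minikube/certs -name '*.crt'); do
	      sudo openssl x509 -noout -in "$c" -checkend 86400 >/dev/null || echo "expiring soon: $c"
	    done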
	I0719 15:15:03.667696   40893 kubeadm.go:392] StartCluster: {Name:multinode-121443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-121443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.166 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:15:03.667809   40893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:15:03.667854   40893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:15:03.706139   40893 command_runner.go:130] > dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f
	I0719 15:15:03.706160   40893 command_runner.go:130] > 502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f
	I0719 15:15:03.706166   40893 command_runner.go:130] > 4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24
	I0719 15:15:03.706174   40893 command_runner.go:130] > fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6
	I0719 15:15:03.706186   40893 command_runner.go:130] > 5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf
	I0719 15:15:03.706194   40893 command_runner.go:130] > d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c
	I0719 15:15:03.706205   40893 command_runner.go:130] > 713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8
	I0719 15:15:03.706218   40893 command_runner.go:130] > b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66
	I0719 15:15:03.706255   40893 cri.go:89] found id: "dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f"
	I0719 15:15:03.706267   40893 cri.go:89] found id: "502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f"
	I0719 15:15:03.706273   40893 cri.go:89] found id: "4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24"
	I0719 15:15:03.706278   40893 cri.go:89] found id: "fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6"
	I0719 15:15:03.706282   40893 cri.go:89] found id: "5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf"
	I0719 15:15:03.706287   40893 cri.go:89] found id: "d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c"
	I0719 15:15:03.706292   40893 cri.go:89] found id: "713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8"
	I0719 15:15:03.706296   40893 cri.go:89] found id: "b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66"
	I0719 15:15:03.706301   40893 cri.go:89] found id: ""
	I0719 15:15:03.706343   40893 ssh_runner.go:195] Run: sudo runc list -f json
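	StartCluster begins by listing the kube-system containers that already exist on the node; the eight IDs found above are what the subsequent start logic works from. The same query can be reproduced by hand, and dropping --quiet turns the bare IDs into a readable table:

	    # Same label filter minikube uses above; -a includes exited containers.
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system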
	
	
	==> CRI-O <==
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.396939012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402355396919395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f750aa5c-5b50-40d5-8556-94ea0d41f93b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.397818974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a0cece4-a4ff-4cae-ad2c-b13ff0c7152e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.397892399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a0cece4-a4ff-4cae-ad2c-b13ff0c7152e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.398380088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a0cece4-a4ff-4cae-ad2c-b13ff0c7152e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.445244064Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd1e2cc7-ea10-4e6c-8be8-2c54c7e02139 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.445333096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd1e2cc7-ea10-4e6c-8be8-2c54c7e02139 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.446722940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=550c0da3-5e4b-4faa-a076-987b3a56ec43 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.447124694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402355447102220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=550c0da3-5e4b-4faa-a076-987b3a56ec43 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.447847637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46aff0fb-70f8-4c4c-9658-85ebf5274c8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.447907172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46aff0fb-70f8-4c4c-9658-85ebf5274c8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.448347113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46aff0fb-70f8-4c4c-9658-85ebf5274c8c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.489154883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d8ff2d4-8be3-4922-bc41-e1f6ddaf657b name=/runtime.v1.RuntimeService/Version
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.489234908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d8ff2d4-8be3-4922-bc41-e1f6ddaf657b name=/runtime.v1.RuntimeService/Version
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.490884421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03c7073a-bc3e-425a-8235-37d4c7704158 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.491474650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402355491379051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03c7073a-bc3e-425a-8235-37d4c7704158 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.491971483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f206b46-3fc4-4db4-9859-895389e3b63f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.492041802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f206b46-3fc4-4db4-9859-895389e3b63f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.492523354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f206b46-3fc4-4db4-9859-895389e3b63f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.535019340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4bcc3e2-f999-4f57-b38c-667cf61402f0 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.535108225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4bcc3e2-f999-4f57-b38c-667cf61402f0 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.536604757Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1cc81f9f-9b35-4a01-9b87-730c63dca632 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.537074721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721402355537051357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cc81f9f-9b35-4a01-9b87-730c63dca632 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.537619709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaeec51d-5a6e-4573-bf5f-4c41b6d7b96a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.537690508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaeec51d-5a6e-4573-bf5f-4c41b6d7b96a name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:19:15 multinode-121443 crio[2888]: time="2024-07-19 15:19:15.538059757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6730504f064a0d1e6bc4df6ddf8008565b7eda87bcdfde5f13cf1e79e4c8f084,PodSandboxId:cd76fcf9d96bc7bc4d1e5ac11c0c6f8b66c8e3c12803dc980ee055d1bac2f97e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721402143875080087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3,PodSandboxId:83ee156bab4c186a7c6b1c4fd09a59a6665d9791d48ef5db22a3f9659e62f99b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721402110427146635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db,PodSandboxId:5ec847a2909166c0dec1cdf375bb9c2863b32e705078b79acf82c20877141cb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721402110293246110,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33,PodSandboxId:0dc484e71d384e31033fa730db03060c7e202ff382dce670e1754fbe531cf522,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721402110114882319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f-be575c590316,},Annotations:map[string]
string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4359892cfd8b910459ce172036ad44abe83b99d5beaee963c8f39e4ffb7a0cea,PodSandboxId:5f4e88c7fcb7c31c80781d006c70771aa5be498ad8ae5aad8103a57aadacf6c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721402110183511613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.ku
bernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442,PodSandboxId:e53ea24eb162b1f57289817c7fb70b43d69b0c23b946ff35b6f4dfb4edb8bf10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721402106369695782,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io.kubernetes.container.hash: 20862aae,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799,PodSandboxId:ece4c48537ed9ca555e683fe1145db101ff0f2fe387cdac3c09228d4382c9eb3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721402106340894905,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b201946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376,PodSandboxId:d1b8609f9f04e127792bfa7ea316f19acbd833f173f0eae166351c0f3b05b9bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721402106267811310,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27,PodSandboxId:5499a35d5feaa57755104120030c19d2a1beb160edc72ccd892e83f1c2dcb027,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721402106285675412,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ed0e950737fb032bd900555e5cbe14c3ac4ec5b8a66dec0c7014e4002b83cd1,PodSandboxId:68c7daeecb45883bd9a3309f6ba1d8225eb99010a84fe3cf5de9c19adc7ffdff,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721401779955017642,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9h6kk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 662b2304-ae04-4f1c-9246-952f88717e35,},Annotations:map[string]string{io.kubernetes.container.hash: f348896d,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f,PodSandboxId:4b7d7142af3bd44a27b2434583a8195d7033afb4ed2529461d7c104986418b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721401724536462588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n7t8w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf335596-f0a3-4e1f-ac5f-872595652c60,},Annotations:map[string]string{io.kubernetes.container.hash: 95f0eaa5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:502bf142ce57cdf3a4adfb7ef0a34894c5d22772e2f62c571223c2123c33165f,PodSandboxId:e713a189f41efc67788d9dbb6b7208edd5bd9deea39c0fcbb64b497ad7c5c107,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721401724463062067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d0b632c7-c920-41a3-92ba-97091eb2779b,},Annotations:map[string]string{io.kubernetes.container.hash: f64ae037,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24,PodSandboxId:d27205d1dc010e54636d402121681a90cf8103140e91e64c29133f0dd8014d9f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721401712631931663,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5zklk,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63,},Annotations:map[string]string{io.kubernetes.container.hash: 8a927db2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6,PodSandboxId:208cafba1ee95844a68680de634f40025d0246cf1fe17e70132adbcbf45e4561,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721401712439005538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lfgrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89ea64a0-446e-407e-af1f
-be575c590316,},Annotations:map[string]string{io.kubernetes.container.hash: 92395519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c,PodSandboxId:eed93d6bc6357070aabcd0f04a9a2e036b7cf7209a61cc50d632f5813b735f5d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721401692021278002,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53658b2
01946db2ee70c7e306511715d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8,PodSandboxId:8c95efc9309d5cadd91deeca4963ee1ad959500079b68a9e44c12a33721dcb60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721401692017726027,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 298b745fc1a3a70c04175b
17b5b8937a,},Annotations:map[string]string{io.kubernetes.container.hash: 101ae03f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf,PodSandboxId:df453b9ddc91351a03677bdcf6548802736eeb7d474e9ebc452afe6d1d5346a8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721401692052166758,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f347dcbbc2d9f1a2ceddb134ff8b68a6,},Annotations:map[string]string{io
.kubernetes.container.hash: 20862aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66,PodSandboxId:5cc05efdd9692f8994a8c1ef1daedfa0c398b42bb31bb4f2fc8dd2aec3986164,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721401691992190472,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-121443,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c598146dc50663969c5b831d9a101208,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaeec51d-5a6e-4573-bf5f-4c41b6d7b96a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6730504f064a0       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   cd76fcf9d96bc       busybox-fc5497c4f-9h6kk
	b675e4fd02893       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      4 minutes ago       Running             kindnet-cni               1                   83ee156bab4c1       kindnet-5zklk
	3f96708926155       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   5ec847a290916       coredns-7db6d8ff4d-n7t8w
	4359892cfd8b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   5f4e88c7fcb7c       storage-provisioner
	221882dff6c78       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   0dc484e71d384       kube-proxy-lfgrb
	c8218905c2e97       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   e53ea24eb162b       etcd-multinode-121443
	293204b7a9058       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   ece4c48537ed9       kube-controller-manager-multinode-121443
	fa302c97d4878       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   5499a35d5feaa       kube-scheduler-multinode-121443
	e2dc920afc846       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   d1b8609f9f04e       kube-apiserver-multinode-121443
	9ed0e950737fb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   68c7daeecb458       busybox-fc5497c4f-9h6kk
	dc5476a467779       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   4b7d7142af3bd       coredns-7db6d8ff4d-n7t8w
	502bf142ce57c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   e713a189f41ef       storage-provisioner
	4bad540742a5a       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      10 minutes ago      Exited              kindnet-cni               0                   d27205d1dc010       kindnet-5zklk
	fca6c86e8784a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   208cafba1ee95       kube-proxy-lfgrb
	5fe052e6e6bde       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   df453b9ddc913       etcd-multinode-121443
	d5d6d5432c5ba       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   eed93d6bc6357       kube-controller-manager-multinode-121443
	713959ddae427       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   8c95efc9309d5       kube-apiserver-multinode-121443
	b48a01b01787f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   5cc05efdd9692       kube-scheduler-multinode-121443
	
	
	==> coredns [3f967089261559dd8fe89bce56dc68dd0cfc0ba001d43bf6cc40e2d2cdb431db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48649 - 2885 "HINFO IN 1538708654439137221.1084675691582257083. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016121561s
	
	
	==> coredns [dc5476a467779add8e0999ede8a586e66840e8cb47e9fcbab13b5bf34161cd7f] <==
	[INFO] 10.244.1.2:57105 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001844812s
	[INFO] 10.244.1.2:43685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108904s
	[INFO] 10.244.1.2:45894 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105631s
	[INFO] 10.244.1.2:47255 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001257652s
	[INFO] 10.244.1.2:37025 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062341s
	[INFO] 10.244.1.2:47296 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053579s
	[INFO] 10.244.1.2:37030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100183s
	[INFO] 10.244.0.3:40198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130982s
	[INFO] 10.244.0.3:58455 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064572s
	[INFO] 10.244.0.3:36902 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083042s
	[INFO] 10.244.0.3:56286 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057483s
	[INFO] 10.244.1.2:35076 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101714s
	[INFO] 10.244.1.2:49410 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000134587s
	[INFO] 10.244.1.2:48107 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066565s
	[INFO] 10.244.1.2:59682 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107312s
	[INFO] 10.244.0.3:50711 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079336s
	[INFO] 10.244.0.3:52831 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112951s
	[INFO] 10.244.0.3:43664 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070195s
	[INFO] 10.244.0.3:57699 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000126789s
	[INFO] 10.244.1.2:37267 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205656s
	[INFO] 10.244.1.2:49685 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109151s
	[INFO] 10.244.1.2:40234 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100147s
	[INFO] 10.244.1.2:50205 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066778s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-121443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-121443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=multinode-121443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_08_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:08:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-121443
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:19:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:15:09 +0000   Fri, 19 Jul 2024 15:08:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    multinode-121443
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8590a9863ad4803a0c4feae042fd645
	  System UUID:                b8590a98-63ad-4803-a0c4-feae042fd645
	  Boot ID:                    acc7e1ed-b057-4c29-a709-81cc8cb1ff0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9h6kk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 coredns-7db6d8ff4d-n7t8w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-121443                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-5zklk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-121443             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-121443    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-lfgrb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-121443             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-121443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-121443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-121443 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-121443 event: Registered Node multinode-121443 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-121443 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-121443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-121443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-121443 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node multinode-121443 event: Registered Node multinode-121443 in Controller
	
	
	Name:               multinode-121443-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-121443-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=multinode-121443
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T15_15_51_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:15:51 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-121443-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:16:51 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:17:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:17:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:17:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 15:16:21 +0000   Fri, 19 Jul 2024 15:17:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    multinode-121443-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e1fb89433da44eeaf6316cfc1c10470
	  System UUID:                9e1fb894-33da-44ee-af63-16cfc1c10470
	  Boot ID:                    0ec49468-fdfc-4697-a6b7-db70fa7c24fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sr7hh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-5lddz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-gvgth           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)      kubelet          Node multinode-121443-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)      kubelet          Node multinode-121443-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)      kubelet          Node multinode-121443-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m41s                  kubelet          Node multinode-121443-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-121443-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-121443-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-121443-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-121443-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-121443-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.061431] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049950] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.174577] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.144673] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.288594] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.252066] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.530601] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.064557] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989589] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.084264] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.997931] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.094934] systemd-fstab-generator[1468]: Ignoring "noauto" option for root device
	[ +13.142270] kauditd_printk_skb: 60 callbacks suppressed
	[Jul19 15:09] kauditd_printk_skb: 12 callbacks suppressed
	[Jul19 15:15] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.158606] systemd-fstab-generator[2817]: Ignoring "noauto" option for root device
	[  +0.181983] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.159953] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.286473] systemd-fstab-generator[2871]: Ignoring "noauto" option for root device
	[  +0.816751] systemd-fstab-generator[2972]: Ignoring "noauto" option for root device
	[  +2.262865] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +4.619606] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.875153] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.969547] systemd-fstab-generator[3931]: Ignoring "noauto" option for root device
	[ +16.965209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [5fe052e6e6bde7323a1a563bd01e7daaf8feda81a22a1bb12c8dd2d42b05e0bf] <==
	{"level":"warn","ts":"2024-07-19T15:09:14.868934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.040131ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316538273496601209 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-rnd69\" mod_revision:441 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-rnd69\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-rnd69\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T15:09:14.869317Z","caller":"traceutil/trace.go:171","msg":"trace[760675968] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"116.730086ms","start":"2024-07-19T15:09:14.752553Z","end":"2024-07-19T15:09:14.869283Z","steps":["trace[760675968] 'compare'  (duration: 113.854721ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:09:15.005176Z","caller":"traceutil/trace.go:171","msg":"trace[1572115174] linearizableReadLoop","detail":"{readStateIndex:466; appliedIndex:465; }","duration":"105.623535ms","start":"2024-07-19T15:09:14.899538Z","end":"2024-07-19T15:09:15.005161Z","steps":["trace[1572115174] 'read index received'  (duration: 104.56728ms)","trace[1572115174] 'applied index is now lower than readState.Index'  (duration: 1.055611ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T15:09:15.005264Z","caller":"traceutil/trace.go:171","msg":"trace[166726604] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"131.167486ms","start":"2024-07-19T15:09:14.874091Z","end":"2024-07-19T15:09:15.005258Z","steps":["trace[166726604] 'process raft request'  (duration: 130.186366ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:09:15.005515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.869098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-121443-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T15:09:15.005553Z","caller":"traceutil/trace.go:171","msg":"trace[1300675654] range","detail":"{range_begin:/registry/minions/multinode-121443-m02; range_end:; response_count:0; response_revision:443; }","duration":"106.057329ms","start":"2024-07-19T15:09:14.89949Z","end":"2024-07-19T15:09:15.005547Z","steps":["trace[1300675654] 'agreement among raft nodes before linearized reading'  (duration: 105.876174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:09:20.091469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.709247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T15:09:20.091537Z","caller":"traceutil/trace.go:171","msg":"trace[1183754555] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:480; }","duration":"159.862155ms","start":"2024-07-19T15:09:19.931659Z","end":"2024-07-19T15:09:20.091522Z","steps":["trace[1183754555] 'count revisions from in-memory index tree'  (duration: 159.640238ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:09:20.091758Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.286032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-121443-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-07-19T15:09:20.091799Z","caller":"traceutil/trace.go:171","msg":"trace[1650685763] range","detail":"{range_begin:/registry/minions/multinode-121443-m02; range_end:; response_count:1; response_revision:480; }","duration":"201.329204ms","start":"2024-07-19T15:09:19.890462Z","end":"2024-07-19T15:09:20.091791Z","steps":["trace[1650685763] 'range keys from in-memory index tree'  (duration: 201.188902ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:10:15.861948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.741147ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7316538273496601688 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-121443-m03.17e3a5d4b6616e41\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-121443-m03.17e3a5d4b6616e41\" value_size:646 lease:7316538273496601280 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T15:10:15.862534Z","caller":"traceutil/trace.go:171","msg":"trace[1149755698] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"247.661244ms","start":"2024-07-19T15:10:15.61486Z","end":"2024-07-19T15:10:15.862521Z","steps":["trace[1149755698] 'process raft request'  (duration: 78.305828ms)","trace[1149755698] 'compare'  (duration: 168.632811ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T15:10:15.862846Z","caller":"traceutil/trace.go:171","msg":"trace[477100261] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"176.181093ms","start":"2024-07-19T15:10:15.686654Z","end":"2024-07-19T15:10:15.862835Z","steps":["trace[477100261] 'process raft request'  (duration: 175.794065ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:10:17.496844Z","caller":"traceutil/trace.go:171","msg":"trace[1953329838] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"180.89928ms","start":"2024-07-19T15:10:17.31593Z","end":"2024-07-19T15:10:17.496829Z","steps":["trace[1953329838] 'process raft request'  (duration: 180.80395ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:10:17.686372Z","caller":"traceutil/trace.go:171","msg":"trace[388183798] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"102.666999ms","start":"2024-07-19T15:10:17.58369Z","end":"2024-07-19T15:10:17.686357Z","steps":["trace[388183798] 'process raft request'  (duration: 101.655209ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T15:13:30.132174Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T15:13:30.132371Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-121443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	{"level":"warn","ts":"2024-07-19T15:13:30.132519Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T15:13:30.13261Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T15:13:30.183978Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T15:13:30.184053Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.32:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T15:13:30.184133Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d4c05646b7156589","current-leader-member-id":"d4c05646b7156589"}
	{"level":"info","ts":"2024-07-19T15:13:30.188681Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:13:30.188851Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:13:30.188881Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-121443","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"]}
	
	
	==> etcd [c8218905c2e97e859c331e642a11ea8a618c1e7b5312e38099685fc97c4d6442] <==
	{"level":"info","ts":"2024-07-19T15:15:06.860873Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:15:06.860883Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:15:06.861134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 switched to configuration voters=(15330347993288500617)"}
	{"level":"info","ts":"2024-07-19T15:15:06.861211Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","added-peer-id":"d4c05646b7156589","added-peer-peer-urls":["https://192.168.39.32:2380"]}
	{"level":"info","ts":"2024-07-19T15:15:06.861357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:15:06.861454Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:15:06.86937Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:15:06.880486Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:15:06.888075Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2024-07-19T15:15:06.897552Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:15:06.899465Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:15:08.160674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T15:15:08.160759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:15:08.160786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2024-07-19T15:15:08.160799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.160809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.16082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.16083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2024-07-19T15:15:08.169566Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:multinode-121443 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:15:08.169793Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:15:08.169936Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:15:08.169998Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:15:08.170089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:15:08.172144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2024-07-19T15:15:08.172237Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:19:16 up 11 min,  0 users,  load average: 0.05, 0.21, 0.14
	Linux multinode-121443 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4bad540742a5ad26c53c54d354f645140dbb6e07e2bc385eba9bc7258b759d24] <==
	I0719 15:12:43.679570       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:12:53.679722       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:12:53.679839       1 main.go:303] handling current node
	I0719 15:12:53.679891       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:12:53.679912       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:12:53.680112       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:12:53.680141       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:13:03.677611       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:13:03.677667       1 main.go:303] handling current node
	I0719 15:13:03.677685       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:13:03.677692       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:13:03.677829       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:13:03.677859       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:13:13.678597       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:13:13.678668       1 main.go:303] handling current node
	I0719 15:13:13.678692       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:13:13.678698       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:13:13.678905       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:13:13.678930       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	I0719 15:13:23.676957       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:13:23.677066       1 main.go:303] handling current node
	I0719 15:13:23.677095       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:13:23.677114       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:13:23.677315       1 main.go:299] Handling node with IPs: map[192.168.39.166:{}]
	I0719 15:13:23.677339       1 main.go:326] Node multinode-121443-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b675e4fd028938968d0f46c35c6bf21b5f1be74b457e59fa4551e7e16a00f6d3] <==
	I0719 15:18:11.387449       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:18:21.390475       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:18:21.390519       1 main.go:303] handling current node
	I0719 15:18:21.390532       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:18:21.390537       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:18:31.387032       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:18:31.387861       1 main.go:303] handling current node
	I0719 15:18:31.387891       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:18:31.387912       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:18:41.387112       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:18:41.387558       1 main.go:303] handling current node
	I0719 15:18:41.387643       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:18:41.387688       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:18:51.387686       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:18:51.387894       1 main.go:303] handling current node
	I0719 15:18:51.387938       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:18:51.387957       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:19:01.395946       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:19:01.396007       1 main.go:303] handling current node
	I0719 15:19:01.396039       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:19:01.396045       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	I0719 15:19:11.387553       1 main.go:299] Handling node with IPs: map[192.168.39.32:{}]
	I0719 15:19:11.387601       1 main.go:303] handling current node
	I0719 15:19:11.387616       1 main.go:299] Handling node with IPs: map[192.168.39.226:{}]
	I0719 15:19:11.387622       1 main.go:326] Node multinode-121443-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [713959ddae4278e8364bd5b4f5ae719f0681c4a9dd03a18e9eaee7e6ec5ab3b8] <==
	W0719 15:13:30.165094       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165153       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165196       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165259       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165329       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165391       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165861       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165930       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.165970       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166029       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166096       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166216       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166315       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166381       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166484       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166525       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166584       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166650       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166712       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166769       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.166830       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167006       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167070       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167126       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:13:30.167183       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e2dc920afc846b567f130afa42943f3e5a00abe3578b63337b141740fdde6376] <==
	I0719 15:15:09.509207       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 15:15:09.509894       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 15:15:09.519320       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 15:15:09.519435       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 15:15:09.519476       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 15:15:09.519832       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 15:15:09.519889       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 15:15:09.519992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 15:15:09.525131       1 aggregator.go:165] initial CRD sync complete...
	I0719 15:15:09.525197       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 15:15:09.525223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 15:15:09.525246       1 cache.go:39] Caches are synced for autoregister controller
	I0719 15:15:09.526828       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0719 15:15:09.542491       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 15:15:09.563746       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 15:15:09.563785       1 policy_source.go:224] refreshing policies
	I0719 15:15:09.590921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 15:15:10.447055       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 15:15:11.625180       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 15:15:11.773694       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 15:15:11.787734       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 15:15:11.861837       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 15:15:11.867940       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 15:15:21.853201       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 15:15:21.857028       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [293204b7a9058dd42fab27ccfec3b034255147c7e4090a21b85f6ec89ba74799] <==
	I0719 15:15:51.297798       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m02" podCIDRs=["10.244.1.0/24"]
	I0719 15:15:52.446489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.824µs"
	I0719 15:15:53.162165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.304µs"
	I0719 15:15:53.178570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.783µs"
	I0719 15:15:53.190560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.23µs"
	I0719 15:15:53.254221       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.609µs"
	I0719 15:15:53.259200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.65µs"
	I0719 15:15:53.263451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.613µs"
	I0719 15:16:11.073445       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:16:11.091898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.871µs"
	I0719 15:16:11.108064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.507µs"
	I0719 15:16:14.710900       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.836319ms"
	I0719 15:16:14.711118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.162µs"
	I0719 15:16:29.238334       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:16:30.266267       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:16:30.266904       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m03\" does not exist"
	I0719 15:16:30.285598       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m03" podCIDRs=["10.244.2.0/24"]
	I0719 15:16:49.101490       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m03"
	I0719 15:16:54.327767       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:17:32.019353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.05872ms"
	I0719 15:17:32.019985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.279µs"
	I0719 15:17:41.943519       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hdr4s"
	I0719 15:17:41.968171       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hdr4s"
	I0719 15:17:41.968220       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fnr7q"
	I0719 15:17:41.991668       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fnr7q"
	
	
	==> kube-controller-manager [d5d6d5432c5baba706e5dd057dea92f4b8827ab978b05aa6958f54edf65c0a9c] <==
	I0719 15:09:15.054245       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m02\" does not exist"
	I0719 15:09:15.069951       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m02" podCIDRs=["10.244.1.0/24"]
	I0719 15:09:15.140929       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-121443-m02"
	I0719 15:09:34.886157       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:09:36.954146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.774533ms"
	I0719 15:09:36.967723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.348543ms"
	I0719 15:09:36.967838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.963µs"
	I0719 15:09:36.982525       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="164.868µs"
	I0719 15:09:40.742188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.3474ms"
	I0719 15:09:40.742484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.624µs"
	I0719 15:09:41.875270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.44058ms"
	I0719 15:09:41.875574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.304µs"
	I0719 15:10:15.865836       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m03\" does not exist"
	I0719 15:10:15.866854       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:10:15.876812       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m03" podCIDRs=["10.244.2.0/24"]
	I0719 15:10:20.162787       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-121443-m03"
	I0719 15:10:35.769104       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:11:03.796845       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:11:04.783905       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-121443-m03\" does not exist"
	I0719 15:11:04.784320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:11:04.795896       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-121443-m03" podCIDRs=["10.244.3.0/24"]
	I0719 15:11:24.660227       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m02"
	I0719 15:12:10.220075       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-121443-m03"
	I0719 15:12:10.268830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.784234ms"
	I0719 15:12:10.268942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.248µs"
	
	
	==> kube-proxy [221882dff6c781f7c3ac121b9c680878555e058b5f5a54a621bfa4bc85088d33] <==
	I0719 15:15:10.528640       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:15:10.549708       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	I0719 15:15:10.651440       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:15:10.651499       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:15:10.651529       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:15:10.656722       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:15:10.656949       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:15:10.656979       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:15:10.659339       1 config.go:192] "Starting service config controller"
	I0719 15:15:10.659381       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:15:10.659487       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:15:10.659508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:15:10.660125       1 config.go:319] "Starting node config controller"
	I0719 15:15:10.660149       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:15:10.759505       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:15:10.759611       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:15:10.760232       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fca6c86e8784ab44d55b756c31dfd1513e3d0f57c6f7b8a8861a73dcc7f431a6] <==
	I0719 15:08:32.599334       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:08:32.622328       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.32"]
	I0719 15:08:32.670526       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:08:32.670573       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:08:32.670589       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:08:32.674670       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:08:32.674901       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:08:32.675026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:08:32.676197       1 config.go:192] "Starting service config controller"
	I0719 15:08:32.676243       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:08:32.676361       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:08:32.676384       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:08:32.678918       1 config.go:319] "Starting node config controller"
	I0719 15:08:32.678954       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:08:32.776682       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:08:32.776766       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:08:32.779358       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b48a01b01787fbc6c8cd7a8bdf32fcbb90a253bbb5c1f02ad3cf51fd8ed66a66] <==
	E0719 15:08:14.729744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 15:08:14.728262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 15:08:14.729774       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 15:08:14.724638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:14.729808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.557892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 15:08:15.557943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 15:08:15.574341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 15:08:15.574389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 15:08:15.700606       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:15.700654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.733301       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 15:08:15.733754       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:08:15.742650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 15:08:15.743025       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 15:08:15.743630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 15:08:15.743703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 15:08:15.859851       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:15.859964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.913469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 15:08:15.913645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 15:08:15.959038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 15:08:15.959136       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0719 15:08:18.217576       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 15:13:30.123170       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fa302c97d48784208affd9caa1eb0241ed79c8aa79691b299fc0a361cef31e27] <==
	I0719 15:15:07.605899       1 serving.go:380] Generated self-signed cert in-memory
	W0719 15:15:09.452675       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 15:15:09.452821       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:15:09.452855       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 15:15:09.452928       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 15:15:09.528285       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:15:09.531261       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:15:09.542188       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:15:09.542238       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:15:09.543018       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:15:09.543120       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 15:15:09.643248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766211    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63-cni-cfg\") pod \"kindnet-5zklk\" (UID: \"e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63\") " pod="kube-system/kindnet-5zklk"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766257    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63-xtables-lock\") pod \"kindnet-5zklk\" (UID: \"e57ea910-98ff-4a51-a1fb-f6d2bf7fdc63\") " pod="kube-system/kindnet-5zklk"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766308    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89ea64a0-446e-407e-af1f-be575c590316-lib-modules\") pod \"kube-proxy-lfgrb\" (UID: \"89ea64a0-446e-407e-af1f-be575c590316\") " pod="kube-system/kube-proxy-lfgrb"
	Jul 19 15:15:09 multinode-121443 kubelet[3102]: I0719 15:15:09.766368    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89ea64a0-446e-407e-af1f-be575c590316-xtables-lock\") pod \"kube-proxy-lfgrb\" (UID: \"89ea64a0-446e-407e-af1f-be575c590316\") " pod="kube-system/kube-proxy-lfgrb"
	Jul 19 15:15:13 multinode-121443 kubelet[3102]: I0719 15:15:13.835113    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 19 15:16:05 multinode-121443 kubelet[3102]: E0719 15:16:05.683476    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 15:16:05 multinode-121443 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 15:17:05 multinode-121443 kubelet[3102]: E0719 15:17:05.683167    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 15:17:05 multinode-121443 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 15:17:05 multinode-121443 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 15:17:05 multinode-121443 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 15:17:05 multinode-121443 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 15:18:05 multinode-121443 kubelet[3102]: E0719 15:18:05.684047    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 15:18:05 multinode-121443 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 15:18:05 multinode-121443 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 15:18:05 multinode-121443 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 15:18:05 multinode-121443 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 15:19:05 multinode-121443 kubelet[3102]: E0719 15:19:05.683667    3102 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 15:19:05 multinode-121443 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 15:19:05 multinode-121443 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 15:19:05 multinode-121443 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 15:19:05 multinode-121443 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:19:15.106552   43196 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-3847/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-121443 -n multinode-121443
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-121443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.28s)
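For reference, the post-mortem helpers above can be re-run by hand against the same profile; this is a minimal sketch, assuming the multinode-121443 profile still exists on the host and the binary from this job is used:

	# Same status check the post-mortem helper runs (API server component only).
	out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-121443 -n multinode-121443
	# List pods that are not in the Running phase across all namespaces, via the profile's kubectl context.
	kubectl --context multinode-121443 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running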

                                                
                                    
x
+
TestPreload (352.85s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-670287 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0719 15:24:11.795571   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 15:24:28.744344   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-670287 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m29.997784307s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-670287 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-670287 image pull gcr.io/k8s-minikube/busybox: (2.865897491s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-670287
E0719 15:27:29.031891   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-670287: exit status 82 (2m0.46760399s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-670287"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-670287 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-19 15:28:33.997325783 +0000 UTC m=+4088.792912304
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-670287 -n test-preload-670287
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-670287 -n test-preload-670287: exit status 3 (18.568498603s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:28:52.562565   46319 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0719 15:28:52.562633   46319 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-670287" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-670287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-670287
--- FAIL: TestPreload (352.85s)
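For reference, the failing step above is the `minikube stop` call that exits 82 with GUEST_STOP_TIMEOUT. A minimal manual repro sketch, assuming the out/minikube-linux-amd64 binary from this job and reusing the profile name from the log (the commands are the same ones the test drives):

	# Start a cluster without the preload tarball, matching the test's flags.
	out/minikube-linux-amd64 start -p test-preload-670287 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	# Pull an image into the cluster, then attempt the stop that timed out in this run.
	out/minikube-linux-amd64 -p test-preload-670287 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-670287
	# Collect logs for a GitHub issue, as the failure message suggests.
	out/minikube-linux-amd64 -p test-preload-670287 logs --file=logs.txt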

                                                
                                    
x
+
TestKubernetesUpgrade (405.49s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.925432799s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-574044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-574044" primary control-plane node in "kubernetes-upgrade-574044" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 15:30:47.250306   47427 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:30:47.250429   47427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:30:47.250438   47427 out.go:304] Setting ErrFile to fd 2...
	I0719 15:30:47.250444   47427 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:30:47.250623   47427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:30:47.251112   47427 out.go:298] Setting JSON to false
	I0719 15:30:47.251936   47427 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4393,"bootTime":1721398654,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:30:47.251987   47427 start.go:139] virtualization: kvm guest
	I0719 15:30:47.253918   47427 out.go:177] * [kubernetes-upgrade-574044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:30:47.255672   47427 notify.go:220] Checking for updates...
	I0719 15:30:47.256735   47427 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:30:47.259014   47427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:30:47.260442   47427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:30:47.261493   47427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:30:47.262888   47427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:30:47.264282   47427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:30:47.265878   47427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:30:47.301541   47427 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 15:30:47.302939   47427 start.go:297] selected driver: kvm2
	I0719 15:30:47.302954   47427 start.go:901] validating driver "kvm2" against <nil>
	I0719 15:30:47.302967   47427 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:30:47.303945   47427 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:30:47.304042   47427 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:30:47.320685   47427 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:30:47.320762   47427 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 15:30:47.321084   47427 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 15:30:47.321109   47427 cni.go:84] Creating CNI manager for ""
	I0719 15:30:47.321116   47427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:30:47.321122   47427 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:30:47.321187   47427 start.go:340] cluster config:
	{Name:kubernetes-upgrade-574044 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-574044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:30:47.321321   47427 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:30:47.323270   47427 out.go:177] * Starting "kubernetes-upgrade-574044" primary control-plane node in "kubernetes-upgrade-574044" cluster
	I0719 15:30:47.324709   47427 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:30:47.324754   47427 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 15:30:47.324765   47427 cache.go:56] Caching tarball of preloaded images
	I0719 15:30:47.324862   47427 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:30:47.324878   47427 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 15:30:47.325277   47427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/config.json ...
	I0719 15:30:47.325303   47427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/config.json: {Name:mk29f0b69c67876c599f26d41ab720a6d64ab3a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:30:47.325462   47427 start.go:360] acquireMachinesLock for kubernetes-upgrade-574044: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:30:47.325511   47427 start.go:364] duration metric: took 25.614µs to acquireMachinesLock for "kubernetes-upgrade-574044"
	I0719 15:30:47.325530   47427 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-574044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-574044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:30:47.325588   47427 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 15:30:47.327844   47427 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 15:30:47.327966   47427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:30:47.327997   47427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:30:47.342508   47427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36245
	I0719 15:30:47.342860   47427 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:30:47.343341   47427 main.go:141] libmachine: Using API Version  1
	I0719 15:30:47.343359   47427 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:30:47.343739   47427 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:30:47.343942   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetMachineName
	I0719 15:30:47.344159   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:30:47.344324   47427 start.go:159] libmachine.API.Create for "kubernetes-upgrade-574044" (driver="kvm2")
	I0719 15:30:47.344351   47427 client.go:168] LocalClient.Create starting
	I0719 15:30:47.344389   47427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 15:30:47.344425   47427 main.go:141] libmachine: Decoding PEM data...
	I0719 15:30:47.344439   47427 main.go:141] libmachine: Parsing certificate...
	I0719 15:30:47.344484   47427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 15:30:47.344500   47427 main.go:141] libmachine: Decoding PEM data...
	I0719 15:30:47.344514   47427 main.go:141] libmachine: Parsing certificate...
	I0719 15:30:47.344537   47427 main.go:141] libmachine: Running pre-create checks...
	I0719 15:30:47.344548   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .PreCreateCheck
	I0719 15:30:47.344864   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetConfigRaw
	I0719 15:30:47.345242   47427 main.go:141] libmachine: Creating machine...
	I0719 15:30:47.345255   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .Create
	I0719 15:30:47.345370   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Creating KVM machine...
	I0719 15:30:47.346579   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found existing default KVM network
	I0719 15:30:47.347241   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:47.347125   47466 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015aa0}
	I0719 15:30:47.347270   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | created network xml: 
	I0719 15:30:47.347282   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | <network>
	I0719 15:30:47.347296   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |   <name>mk-kubernetes-upgrade-574044</name>
	I0719 15:30:47.347306   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |   <dns enable='no'/>
	I0719 15:30:47.347310   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |   
	I0719 15:30:47.347318   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 15:30:47.347322   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |     <dhcp>
	I0719 15:30:47.347331   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 15:30:47.347335   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |     </dhcp>
	I0719 15:30:47.347341   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |   </ip>
	I0719 15:30:47.347349   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG |   
	I0719 15:30:47.347361   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | </network>
	I0719 15:30:47.347372   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | 
	I0719 15:30:47.352380   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | trying to create private KVM network mk-kubernetes-upgrade-574044 192.168.39.0/24...
	I0719 15:30:47.418951   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | private KVM network mk-kubernetes-upgrade-574044 192.168.39.0/24 created
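
The DBG lines above print the <network> XML generated for mk-kubernetes-upgrade-574044 before libvirt creates it. Below is a minimal sketch of the equivalent operation driven from Go via os/exec and the standard virsh CLI: define the same XML as a network and start it. minikube's kvm2 driver actually goes through the libvirt API bindings rather than virsh; the XML content and the qemu:///system URI are taken from this log.

    // Illustrative only: define and start a libvirt network equivalent to the
    // mk-kubernetes-upgrade-574044 XML printed in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const networkXML = `<network>
      <name>mk-kubernetes-upgrade-574044</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func virsh(args ...string) error {
        cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        f, err := os.CreateTemp("", "net-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            panic(err)
        }
        f.Close()

        if err := virsh("net-define", f.Name()); err != nil {
            panic(err)
        }
        if err := virsh("net-start", "mk-kubernetes-upgrade-574044"); err != nil {
            panic(err)
        }
        fmt.Println("network defined and started")
    }
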
	I0719 15:30:47.419018   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:47.418916   47466 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:30:47.419056   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044 ...
	I0719 15:30:47.419080   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 15:30:47.419160   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 15:30:47.660376   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:47.660219   47466 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa...
	I0719 15:30:47.925763   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:47.925632   47466 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/kubernetes-upgrade-574044.rawdisk...
	I0719 15:30:47.925797   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Writing magic tar header
	I0719 15:30:47.925813   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Writing SSH key tar header
	I0719 15:30:47.925824   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:47.925741   47466 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044 ...
	I0719 15:30:47.925841   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044
	I0719 15:30:47.925865   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 15:30:47.925894   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:30:47.925911   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044 (perms=drwx------)
	I0719 15:30:47.925926   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 15:30:47.925940   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 15:30:47.925951   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 15:30:47.925967   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 15:30:47.925980   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Checking permissions on dir: /home/jenkins
	I0719 15:30:47.925997   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Checking permissions on dir: /home
	I0719 15:30:47.926009   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Skipping /home - not owner
	I0719 15:30:47.926023   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 15:30:47.926058   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 15:30:47.926083   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 15:30:47.926096   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Creating domain...
	I0719 15:30:47.927044   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) define libvirt domain using xml: 
	I0719 15:30:47.927067   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) <domain type='kvm'>
	I0719 15:30:47.927079   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   <name>kubernetes-upgrade-574044</name>
	I0719 15:30:47.927087   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   <memory unit='MiB'>2200</memory>
	I0719 15:30:47.927095   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   <vcpu>2</vcpu>
	I0719 15:30:47.927102   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   <features>
	I0719 15:30:47.927111   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <acpi/>
	I0719 15:30:47.927119   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <apic/>
	I0719 15:30:47.927131   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <pae/>
	I0719 15:30:47.927149   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     
	I0719 15:30:47.927160   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   </features>
	I0719 15:30:47.927178   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   <cpu mode='host-passthrough'>
	I0719 15:30:47.927189   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   
	I0719 15:30:47.927204   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   </cpu>
	I0719 15:30:47.927216   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   <os>
	I0719 15:30:47.927226   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <type>hvm</type>
	I0719 15:30:47.927235   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <boot dev='cdrom'/>
	I0719 15:30:47.927245   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <boot dev='hd'/>
	I0719 15:30:47.927253   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <bootmenu enable='no'/>
	I0719 15:30:47.927260   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   </os>
	I0719 15:30:47.927280   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   <devices>
	I0719 15:30:47.927297   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <disk type='file' device='cdrom'>
	I0719 15:30:47.927326   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/boot2docker.iso'/>
	I0719 15:30:47.927343   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <target dev='hdc' bus='scsi'/>
	I0719 15:30:47.927352   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <readonly/>
	I0719 15:30:47.927357   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     </disk>
	I0719 15:30:47.927363   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <disk type='file' device='disk'>
	I0719 15:30:47.927374   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 15:30:47.927392   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/kubernetes-upgrade-574044.rawdisk'/>
	I0719 15:30:47.927408   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <target dev='hda' bus='virtio'/>
	I0719 15:30:47.927426   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     </disk>
	I0719 15:30:47.927442   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <interface type='network'>
	I0719 15:30:47.927453   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <source network='mk-kubernetes-upgrade-574044'/>
	I0719 15:30:47.927461   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <model type='virtio'/>
	I0719 15:30:47.927469   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     </interface>
	I0719 15:30:47.927480   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <interface type='network'>
	I0719 15:30:47.927489   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <source network='default'/>
	I0719 15:30:47.927501   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <model type='virtio'/>
	I0719 15:30:47.927512   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     </interface>
	I0719 15:30:47.927529   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <serial type='pty'>
	I0719 15:30:47.927543   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <target port='0'/>
	I0719 15:30:47.927554   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     </serial>
	I0719 15:30:47.927565   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <console type='pty'>
	I0719 15:30:47.927578   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <target type='serial' port='0'/>
	I0719 15:30:47.927585   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     </console>
	I0719 15:30:47.927597   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     <rng model='virtio'>
	I0719 15:30:47.927611   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)       <backend model='random'>/dev/random</backend>
	I0719 15:30:47.927621   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     </rng>
	I0719 15:30:47.927626   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     
	I0719 15:30:47.927636   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)     
	I0719 15:30:47.927646   47427 main.go:141] libmachine: (kubernetes-upgrade-574044)   </devices>
	I0719 15:30:47.927656   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) </domain>
	I0719 15:30:47.927666   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) 
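
Once the <domain type='kvm'> XML above has been generated, "Creating domain..." amounts to defining it in libvirt and booting it. A small illustration using the standard virsh CLI (the driver itself uses the libvirt API); domain.xml is assumed to hold the XML printed above, and the domain name is the one from this run.

    // Sketch: define and start the libvirt domain described by the XML above.
    package main

    import (
        "os"
        "os/exec"
    )

    func virsh(args ...string) error {
        cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // domain.xml holds the <domain type='kvm'> document printed in the log above
        if err := virsh("define", "domain.xml"); err != nil {
            panic(err)
        }
        if err := virsh("start", "kubernetes-upgrade-574044"); err != nil {
            panic(err)
        }
    }
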
	I0719 15:30:47.931772   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:02:78:4a in network default
	I0719 15:30:47.932349   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Ensuring networks are active...
	I0719 15:30:47.932385   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:47.933084   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Ensuring network default is active
	I0719 15:30:47.933425   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Ensuring network mk-kubernetes-upgrade-574044 is active
	I0719 15:30:47.933956   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Getting domain xml...
	I0719 15:30:47.934708   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Creating domain...
	I0719 15:30:49.240867   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Waiting to get IP...
	I0719 15:30:49.241718   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:49.242088   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:49.242120   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:49.242051   47466 retry.go:31] will retry after 309.885485ms: waiting for machine to come up
	I0719 15:30:49.553627   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:49.554071   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:49.554098   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:49.554033   47466 retry.go:31] will retry after 340.973907ms: waiting for machine to come up
	I0719 15:30:49.896380   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:49.896856   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:49.896883   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:49.896806   47466 retry.go:31] will retry after 471.689821ms: waiting for machine to come up
	I0719 15:30:50.370646   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:50.371130   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:50.371152   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:50.371079   47466 retry.go:31] will retry after 367.065243ms: waiting for machine to come up
	I0719 15:30:50.739293   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:50.739749   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:50.739773   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:50.739708   47466 retry.go:31] will retry after 762.270296ms: waiting for machine to come up
	I0719 15:30:51.503115   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:51.503540   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:51.503564   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:51.503511   47466 retry.go:31] will retry after 725.801485ms: waiting for machine to come up
	I0719 15:30:52.230375   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:52.230854   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:52.230889   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:52.230810   47466 retry.go:31] will retry after 1.135998349s: waiting for machine to come up
	I0719 15:30:53.367901   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:53.368357   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:53.368386   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:53.368307   47466 retry.go:31] will retry after 1.256836782s: waiting for machine to come up
	I0719 15:30:54.626589   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:54.626979   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:54.627041   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:54.626965   47466 retry.go:31] will retry after 1.266736267s: waiting for machine to come up
	I0719 15:30:55.895247   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:55.895787   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:55.895818   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:55.895731   47466 retry.go:31] will retry after 2.036321516s: waiting for machine to come up
	I0719 15:30:57.933231   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:30:57.933620   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:30:57.933650   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:30:57.933566   47466 retry.go:31] will retry after 2.41429669s: waiting for machine to come up
	I0719 15:31:00.351140   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:00.351476   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:31:00.351505   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:31:00.351441   47466 retry.go:31] will retry after 3.167747787s: waiting for machine to come up
	I0719 15:31:03.520976   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:03.521389   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:31:03.521415   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:31:03.521348   47466 retry.go:31] will retry after 4.292970946s: waiting for machine to come up
	I0719 15:31:07.816353   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:07.816733   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Found IP for machine: 192.168.39.87
	I0719 15:31:07.816768   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has current primary IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
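
The "Waiting to get IP" loop above retries with a growing delay (roughly 300ms up to several seconds) until the domain's MAC 52:54:00:0a:cf:68 appears in the network's DHCP leases. Below is a sketch of the same idea, polling virsh net-dhcp-leases with a capped backoff. The network name and MAC are the ones from this run; the parsing is illustrative rather than minikube's implementation.

    // Sketch of the wait-for-IP loop: poll the network's DHCP leases until the
    // domain's MAC shows up, backing off between attempts.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func leaseIP(network, mac string) (string, bool) {
        out, err := exec.Command("virsh", "-c", "qemu:///system", "net-dhcp-leases", network).Output()
        if err != nil {
            return "", false
        }
        for _, line := range strings.Split(string(out), "\n") {
            if !strings.Contains(line, mac) {
                continue
            }
            for _, f := range strings.Fields(line) {
                if strings.Contains(f, "/") { // e.g. 192.168.39.87/24
                    return strings.SplitN(f, "/", 2)[0], true
                }
            }
        }
        return "", false
    }

    func main() {
        const network, mac = "mk-kubernetes-upgrade-574044", "52:54:00:0a:cf:68"
        delay := 300 * time.Millisecond
        for attempt := 1; attempt <= 20; attempt++ {
            if ip, ok := leaseIP(network, mac); ok {
                fmt.Println("found IP:", ip)
                return
            }
            fmt.Printf("attempt %d: no lease yet, retrying after %s\n", attempt, delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay *= 2 // capped exponential backoff
            }
        }
        fmt.Println("gave up waiting for an IP")
    }
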
	I0719 15:31:07.816779   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Reserving static IP address...
	I0719 15:31:07.817107   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-574044", mac: "52:54:00:0a:cf:68", ip: "192.168.39.87"} in network mk-kubernetes-upgrade-574044
	I0719 15:31:07.891186   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Getting to WaitForSSH function...
	I0719 15:31:07.891209   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Reserved static IP address: 192.168.39.87
	I0719 15:31:07.891221   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Waiting for SSH to be available...
	I0719 15:31:07.894294   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:07.894707   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:07.894747   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:07.894899   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Using SSH client type: external
	I0719 15:31:07.894926   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa (-rw-------)
	I0719 15:31:07.894960   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.87 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:31:07.894989   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | About to run SSH command:
	I0719 15:31:07.895003   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | exit 0
	I0719 15:31:08.019197   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | SSH cmd err, output: <nil>: 
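
WaitForSSH above shells out to the external ssh client with the option list shown in the log and treats a successful `exit 0` as proof that SSH is up. A compact sketch of that probe; the key path and IP are the ones from this run.

    // Sketch of the SSH-availability probe: run `exit 0` over ssh with the
    // options from the log and treat a zero exit status as "SSH is available".
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no", "-o", "ControlPath=none",
            "-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes", "-i", keyPath, "-p", "22",
            "docker@" + ip, "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa"
        for i := 0; i < 30; i++ {
            if sshReady("192.168.39.87", key) {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for SSH")
    }
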
	I0719 15:31:08.019452   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) KVM machine creation complete!
	I0719 15:31:08.020006   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetConfigRaw
	I0719 15:31:08.020561   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:31:08.020785   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:31:08.020925   47427 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 15:31:08.020941   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetState
	I0719 15:31:08.022404   47427 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 15:31:08.022422   47427 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 15:31:08.022430   47427 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 15:31:08.022439   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:08.024951   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.025258   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.025281   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.025409   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:08.025635   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.025812   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.025959   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:08.026175   47427 main.go:141] libmachine: Using SSH client type: native
	I0719 15:31:08.026486   47427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 15:31:08.026503   47427 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 15:31:08.129912   47427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:31:08.129936   47427 main.go:141] libmachine: Detecting the provisioner...
	I0719 15:31:08.129958   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:08.132915   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.133307   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.133329   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.133508   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:08.133702   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.133866   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.134013   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:08.134190   47427 main.go:141] libmachine: Using SSH client type: native
	I0719 15:31:08.134406   47427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 15:31:08.134422   47427 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 15:31:08.239267   47427 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 15:31:08.239413   47427 main.go:141] libmachine: found compatible host: buildroot
	I0719 15:31:08.239433   47427 main.go:141] libmachine: Provisioning with buildroot...
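
Provisioner detection above is just `cat /etc/os-release` plus matching on the ID/NAME fields (Buildroot here). A local sketch of that parsing step; the log performs the read over SSH, and the field names come straight from the output above.

    // Sketch: parse /etc/os-release key=value lines and report the distro,
    // the way the "found compatible host: buildroot" decision is made above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if k, v, ok := strings.Cut(sc.Text(), "="); ok {
                info[k] = strings.Trim(v, `"`)
            }
        }
        fmt.Printf("ID=%s NAME=%s VERSION_ID=%s\n", info["ID"], info["NAME"], info["VERSION_ID"])
        if info["ID"] == "buildroot" {
            fmt.Println("compatible host: buildroot")
        }
    }
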
	I0719 15:31:08.239449   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetMachineName
	I0719 15:31:08.239711   47427 buildroot.go:166] provisioning hostname "kubernetes-upgrade-574044"
	I0719 15:31:08.239740   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetMachineName
	I0719 15:31:08.239963   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:08.242741   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.243112   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.243146   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.243338   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:08.243512   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.243655   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.243853   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:08.243995   47427 main.go:141] libmachine: Using SSH client type: native
	I0719 15:31:08.244182   47427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 15:31:08.244199   47427 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-574044 && echo "kubernetes-upgrade-574044" | sudo tee /etc/hostname
	I0719 15:31:08.368693   47427 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-574044
	
	I0719 15:31:08.368749   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:08.371478   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.371870   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.371897   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.371996   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:08.372191   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.372382   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.372484   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:08.372658   47427 main.go:141] libmachine: Using SSH client type: native
	I0719 15:31:08.372876   47427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 15:31:08.372900   47427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-574044' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-574044/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-574044' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:31:08.492190   47427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
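
The shell snippet above makes sure the guest's /etc/hosts maps 127.0.1.1 to the new hostname, rewriting an existing 127.0.1.1 entry or appending one. Roughly the same logic in Go, operating on a local file for illustration (the hostname is the one from this run; the real step runs over SSH with sudo).

    // Sketch of the /etc/hosts fix-up: ensure 127.0.1.1 maps to the hostname,
    // by rewriting an existing 127.0.1.1 line or appending a new one.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        replaced := false
        for i, line := range lines {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[1] == hostname {
                return nil // entry already present
            }
            if len(fields) >= 1 && fields[0] == "127.0.1.1" && !replaced {
                lines[i] = "127.0.1.1 " + hostname
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "kubernetes-upgrade-574044"); err != nil {
            panic(err)
        }
        fmt.Println("hosts entry ensured")
    }
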
	I0719 15:31:08.492220   47427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:31:08.492266   47427 buildroot.go:174] setting up certificates
	I0719 15:31:08.492279   47427 provision.go:84] configureAuth start
	I0719 15:31:08.492295   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetMachineName
	I0719 15:31:08.492618   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetIP
	I0719 15:31:08.495642   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.495987   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.496022   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.496150   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:08.498450   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.498723   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.498763   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.498930   47427 provision.go:143] copyHostCerts
	I0719 15:31:08.498999   47427 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:31:08.499013   47427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:31:08.499085   47427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:31:08.499219   47427 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:31:08.499232   47427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:31:08.499267   47427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:31:08.499358   47427 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:31:08.499368   47427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:31:08.499402   47427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:31:08.499481   47427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-574044 san=[127.0.0.1 192.168.39.87 kubernetes-upgrade-574044 localhost minikube]
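
The server certificate above is issued by the profile's CA with the SAN list [127.0.0.1 192.168.39.87 kubernetes-upgrade-574044 localhost minikube]. A self-contained sketch of what that means in crypto/x509 terms: a throwaway CA signs a server cert carrying those DNS names and IPs. Illustrative only; minikube signs with the real ca.pem/ca-key.pem shown earlier, and error handling is elided for brevity.

    // Sketch: issue a server certificate with the SANs seen in the log,
    // signed by an in-memory CA standing in for minikube's ca.pem/ca-key.pem.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (errors ignored for brevity).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-574044"}},
            DNSNames:     []string{"kubernetes-upgrade-574044", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.87")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
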
	I0719 15:31:08.587041   47427 provision.go:177] copyRemoteCerts
	I0719 15:31:08.587109   47427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:31:08.587133   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:08.590091   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.590657   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.590688   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.590762   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:08.591001   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.591161   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:08.591348   47427 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa Username:docker}
	I0719 15:31:08.677308   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:31:08.706192   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0719 15:31:08.737144   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 15:31:08.767979   47427 provision.go:87] duration metric: took 275.68596ms to configureAuth
	I0719 15:31:08.768005   47427 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:31:08.768170   47427 config.go:182] Loaded profile config "kubernetes-upgrade-574044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:31:08.768239   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:08.772107   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.772562   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:08.772635   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:08.772836   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:08.773084   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.773254   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:08.773386   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:08.773539   47427 main.go:141] libmachine: Using SSH client type: native
	I0719 15:31:08.773732   47427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 15:31:08.773755   47427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:31:09.047632   47427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:31:09.047663   47427 main.go:141] libmachine: Checking connection to Docker...
	I0719 15:31:09.047675   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetURL
	I0719 15:31:09.048985   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Using libvirt version 6000000
	I0719 15:31:09.051242   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.051574   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:09.051594   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.051859   47427 main.go:141] libmachine: Docker is up and running!
	I0719 15:31:09.051877   47427 main.go:141] libmachine: Reticulating splines...
	I0719 15:31:09.051884   47427 client.go:171] duration metric: took 21.70752625s to LocalClient.Create
	I0719 15:31:09.051910   47427 start.go:167] duration metric: took 21.707585458s to libmachine.API.Create "kubernetes-upgrade-574044"
	I0719 15:31:09.051923   47427 start.go:293] postStartSetup for "kubernetes-upgrade-574044" (driver="kvm2")
	I0719 15:31:09.051938   47427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:31:09.051977   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:31:09.052304   47427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:31:09.052337   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:09.054849   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.055126   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:09.055167   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.055304   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:09.055470   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:09.055690   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:09.055860   47427 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa Username:docker}
	I0719 15:31:09.137249   47427 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:31:09.141558   47427 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:31:09.141580   47427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:31:09.141641   47427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:31:09.141708   47427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:31:09.141788   47427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:31:09.151328   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:31:09.175563   47427 start.go:296] duration metric: took 123.622995ms for postStartSetup
	I0719 15:31:09.175612   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetConfigRaw
	I0719 15:31:09.176203   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetIP
	I0719 15:31:09.179143   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.179441   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:09.179470   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.179768   47427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/config.json ...
	I0719 15:31:09.179958   47427 start.go:128] duration metric: took 21.854362008s to createHost
	I0719 15:31:09.179981   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:09.182617   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.183008   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:09.183139   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.183178   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:09.183398   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:09.183574   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:09.183757   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:09.183939   47427 main.go:141] libmachine: Using SSH client type: native
	I0719 15:31:09.184150   47427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.87 22 <nil> <nil>}
	I0719 15:31:09.184164   47427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:31:09.291560   47427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721403069.265032887
	
	I0719 15:31:09.291608   47427 fix.go:216] guest clock: 1721403069.265032887
	I0719 15:31:09.291619   47427 fix.go:229] Guest: 2024-07-19 15:31:09.265032887 +0000 UTC Remote: 2024-07-19 15:31:09.179970007 +0000 UTC m=+21.970569890 (delta=85.06288ms)
	I0719 15:31:09.291662   47427 fix.go:200] guest clock delta is within tolerance: 85.06288ms
	I0719 15:31:09.291674   47427 start.go:83] releasing machines lock for "kubernetes-upgrade-574044", held for 21.966149498s
	I0719 15:31:09.291719   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:31:09.292057   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetIP
	I0719 15:31:09.295122   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.295581   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:09.295615   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.295747   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:31:09.296264   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:31:09.296424   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:31:09.296514   47427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:31:09.296570   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:09.296667   47427 ssh_runner.go:195] Run: cat /version.json
	I0719 15:31:09.296721   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:31:09.299139   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.299281   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.299521   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:09.299551   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.299663   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:09.299712   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:09.299865   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:09.299878   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:31:09.300063   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:09.300066   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:31:09.300234   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:09.300246   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:31:09.300386   47427 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa Username:docker}
	I0719 15:31:09.300393   47427 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa Username:docker}
	I0719 15:31:09.404895   47427 ssh_runner.go:195] Run: systemctl --version
	I0719 15:31:09.411593   47427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:31:09.583914   47427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:31:09.590960   47427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:31:09.591049   47427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:31:09.609949   47427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:31:09.609981   47427 start.go:495] detecting cgroup driver to use...
	I0719 15:31:09.610058   47427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:31:09.628417   47427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:31:09.643636   47427 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:31:09.643695   47427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:31:09.661632   47427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:31:09.677905   47427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:31:09.802353   47427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:31:09.959095   47427 docker.go:233] disabling docker service ...
	I0719 15:31:09.959167   47427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:31:09.976682   47427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:31:09.992037   47427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:31:10.154321   47427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:31:10.294786   47427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:31:10.309045   47427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:31:10.330824   47427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:31:10.330903   47427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:31:10.345172   47427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:31:10.345250   47427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:31:10.360120   47427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:31:10.373631   47427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:31:10.386056   47427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:31:10.397602   47427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:31:10.410028   47427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:31:10.410088   47427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:31:10.425123   47427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:31:10.437101   47427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:31:10.572762   47427 ssh_runner.go:195] Run: sudo systemctl restart crio
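	The runtime preparation above reduces to a handful of host commands; a condensed sketch, assuming the same /etc/crio/crio.conf.d/02-crio.conf drop-in and registry.k8s.io/pause:3.2 pause image used in this run:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroupfs cgroup driver in the CRI-O drop-in
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# bridge netfilter and IPv4 forwarding must be in place before kubeadm runs
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio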
	I0719 15:31:10.728559   47427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:31:10.728632   47427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:31:10.735258   47427 start.go:563] Will wait 60s for crictl version
	I0719 15:31:10.735323   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:10.740882   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:31:10.792966   47427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:31:10.793063   47427 ssh_runner.go:195] Run: crio --version
	I0719 15:31:10.825147   47427 ssh_runner.go:195] Run: crio --version
	I0719 15:31:10.860720   47427 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:31:10.861945   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetIP
	I0719 15:31:10.865005   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:10.865384   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:31:01 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:31:10.865424   47427 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:31:10.865816   47427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:31:10.870326   47427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:31:10.883975   47427 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-574044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-574044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:31:10.884086   47427 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:31:10.884146   47427 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:31:10.922650   47427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:31:10.922708   47427 ssh_runner.go:195] Run: which lz4
	I0719 15:31:10.926743   47427 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:31:10.931330   47427 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:31:10.931364   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:31:12.807483   47427 crio.go:462] duration metric: took 1.880771222s to copy over tarball
	I0719 15:31:12.807565   47427 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:31:15.448050   47427 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.640454583s)
	I0719 15:31:15.448080   47427 crio.go:469] duration metric: took 2.640570104s to extract the tarball
	I0719 15:31:15.448089   47427 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:31:15.489851   47427 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:31:15.543906   47427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:31:15.543933   47427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:31:15.544016   47427 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:31:15.544034   47427 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:31:15.544032   47427 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:31:15.544064   47427 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:31:15.544072   47427 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:31:15.544093   47427 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:31:15.544110   47427 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:31:15.544109   47427 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:31:15.545878   47427 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:31:15.545920   47427 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:31:15.545923   47427 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:31:15.545921   47427 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:31:15.545978   47427 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:31:15.546006   47427 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:31:15.546020   47427 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:31:15.546040   47427 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:31:15.746824   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:31:15.791263   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:31:15.799741   47427 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:31:15.799777   47427 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:31:15.799810   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:15.846755   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:31:15.846827   47427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:31:15.846867   47427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:31:15.846910   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:15.864023   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:31:15.885226   47427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:31:15.885322   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:31:15.899100   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:31:15.910689   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:31:15.915072   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:31:15.928365   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:31:15.954056   47427 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:31:15.954105   47427 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:31:15.954156   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:15.954155   47427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:31:16.025728   47427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:31:16.025801   47427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:31:16.025873   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:16.034415   47427 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:31:16.034458   47427 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:31:16.034506   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:16.043591   47427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:31:16.043640   47427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:31:16.043646   47427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:31:16.043669   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:31:16.043675   47427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:31:16.043691   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:16.043705   47427 ssh_runner.go:195] Run: which crictl
	I0719 15:31:16.043731   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:31:16.043765   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:31:16.121674   47427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:31:16.121738   47427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:31:16.121810   47427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:31:16.121886   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:31:16.121893   47427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:31:16.156180   47427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:31:16.166829   47427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:31:16.448292   47427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:31:16.592157   47427 cache_images.go:92] duration metric: took 1.048205742s to LoadCachedImages
	W0719 15:31:16.592250   47427 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
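	With no preload and an empty local image cache, one manual fallback (not a step minikube takes in this run, where it falls through to kubeadm's own image pull) is to fetch the v1.20.0 control-plane images straight from the registry with crictl:

	# list what the runtime already has (the same check minikube runs above)
	sudo crictl images --output json
	# pull the missing images for this Kubernetes version by hand
	for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
	  sudo crictl pull registry.k8s.io/${img}:v1.20.0
	done
	sudo crictl pull registry.k8s.io/etcd:3.4.13-0
	sudo crictl pull registry.k8s.io/coredns:1.7.0
	sudo crictl pull registry.k8s.io/pause:3.2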
	I0719 15:31:16.592270   47427 kubeadm.go:934] updating node { 192.168.39.87 8443 v1.20.0 crio true true} ...
	I0719 15:31:16.592390   47427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-574044 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-574044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:31:16.592471   47427 ssh_runner.go:195] Run: crio config
	I0719 15:31:16.640579   47427 cni.go:84] Creating CNI manager for ""
	I0719 15:31:16.640603   47427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:31:16.640615   47427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:31:16.640633   47427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.87 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-574044 NodeName:kubernetes-upgrade-574044 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.87"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.87 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:31:16.640771   47427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.87
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-574044"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.87
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.87"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:31:16.640827   47427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:31:16.651266   47427 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:31:16.651339   47427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:31:16.661405   47427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0719 15:31:16.679253   47427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:31:16.695403   47427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
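	Before the real init, the staged config can be exercised against the same kubeadm binary minikube installed on the node. Both commands below are a debugging sketch rather than steps from this run; kubeadm config images list and kubeadm init --dry-run are standard kubeadm subcommands:

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run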
	I0719 15:31:16.712741   47427 ssh_runner.go:195] Run: grep 192.168.39.87	control-plane.minikube.internal$ /etc/hosts
	I0719 15:31:16.716829   47427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.87	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:31:16.729883   47427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:31:16.865955   47427 ssh_runner.go:195] Run: sudo systemctl start kubelet
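	The kubelet is brought up from the two unit files just written; a minimal equivalent on the node would be the following (note that this run only starts the service, which is why kubeadm later warns that kubelet.service is not enabled):

	# inspect the drop-in and unit staged above
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	cat /lib/systemd/system/kubelet.service
	# reload units and start the kubelet
	sudo systemctl daemon-reload
	sudo systemctl start kubelet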
	I0719 15:31:16.888148   47427 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044 for IP: 192.168.39.87
	I0719 15:31:16.888166   47427 certs.go:194] generating shared ca certs ...
	I0719 15:31:16.888180   47427 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:31:16.888346   47427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:31:16.888395   47427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:31:16.888404   47427 certs.go:256] generating profile certs ...
	I0719 15:31:16.888472   47427 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.key
	I0719 15:31:16.888488   47427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.crt with IP's: []
	I0719 15:31:17.041204   47427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.crt ...
	I0719 15:31:17.041231   47427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.crt: {Name:mkfb10e38659a9e2d3ad5033923bd614d5a7d867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:31:17.041408   47427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.key ...
	I0719 15:31:17.041426   47427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.key: {Name:mk41fdea60154794ae555ee077e6ad02de2cc16a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:31:17.041550   47427 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.key.5b0fc992
	I0719 15:31:17.041580   47427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.crt.5b0fc992 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.87]
	I0719 15:31:17.357666   47427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.crt.5b0fc992 ...
	I0719 15:31:17.357693   47427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.crt.5b0fc992: {Name:mk87d0c87bef12c87df727745404d810ca89d4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:31:17.357880   47427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.key.5b0fc992 ...
	I0719 15:31:17.357900   47427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.key.5b0fc992: {Name:mk83c2221e4b5bf71ffe4e08599d563a24531207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:31:17.358011   47427 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.crt.5b0fc992 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.crt
	I0719 15:31:17.358110   47427 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.key.5b0fc992 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.key
	I0719 15:31:17.358180   47427 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.key
	I0719 15:31:17.358204   47427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.crt with IP's: []
	I0719 15:31:17.665919   47427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.crt ...
	I0719 15:31:17.665947   47427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.crt: {Name:mkcf3df795c55a3ffe7bac4ba97a0eae23791b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:31:17.666099   47427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.key ...
	I0719 15:31:17.666111   47427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.key: {Name:mkf7956a32039f571494cc2d8d2479cf83643de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:31:17.666280   47427 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:31:17.666318   47427 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:31:17.666327   47427 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:31:17.666346   47427 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:31:17.666369   47427 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:31:17.666395   47427 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:31:17.666433   47427 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:31:17.666965   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:31:17.694424   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:31:17.721817   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:31:17.746605   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:31:17.778222   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 15:31:17.812083   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:31:17.835852   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:31:17.859481   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:31:17.882723   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:31:17.905914   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:31:17.935437   47427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:31:17.959097   47427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:31:17.975844   47427 ssh_runner.go:195] Run: openssl version
	I0719 15:31:17.981855   47427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:31:17.992599   47427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:31:17.997305   47427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:31:17.997359   47427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:31:18.002993   47427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:31:18.013462   47427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:31:18.023844   47427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:31:18.028633   47427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:31:18.028676   47427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:31:18.034243   47427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:31:18.044688   47427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:31:18.055295   47427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:31:18.059811   47427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:31:18.059866   47427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:31:18.065446   47427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
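	The three symlinks created above follow OpenSSL's subject-hash convention: the link name under /etc/ssl/certs is the certificate's subject hash plus a ".0" suffix, which is what the openssl x509 -hash call computes. For the minikube CA, for example:

	# prints b5213941, the hash behind the /etc/ssl/certs/b5213941.0 link above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0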
	I0719 15:31:18.075998   47427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:31:18.080278   47427 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 15:31:18.080340   47427 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-574044 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-574044 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:31:18.080423   47427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:31:18.080508   47427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:31:18.125445   47427 cri.go:89] found id: ""
	I0719 15:31:18.125518   47427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:31:18.135651   47427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:31:18.145308   47427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:31:18.154829   47427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:31:18.154850   47427 kubeadm.go:157] found existing configuration files:
	
	I0719 15:31:18.154894   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:31:18.166368   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:31:18.166437   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:31:18.176640   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:31:18.186050   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:31:18.186109   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:31:18.195828   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:31:18.205169   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:31:18.205229   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:31:18.215008   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:31:18.224827   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:31:18.224879   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
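	The four checks above amount to a sweep that removes any stale kubeconfig not already pointing at this cluster's control-plane endpoint; an equivalent loop would be:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done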
	I0719 15:31:18.234574   47427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:31:18.371419   47427 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:31:18.371480   47427 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:31:18.544802   47427 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:31:18.544968   47427 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:31:18.545129   47427 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:31:18.722785   47427 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:31:18.725396   47427 out.go:204]   - Generating certificates and keys ...
	I0719 15:31:18.725496   47427 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:31:18.725601   47427 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:31:18.914428   47427 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 15:31:19.242339   47427 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 15:31:19.456977   47427 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 15:31:19.680483   47427 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 15:31:19.876335   47427 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 15:31:19.876608   47427 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-574044 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	I0719 15:31:20.241895   47427 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 15:31:20.242403   47427 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-574044 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	I0719 15:31:20.421145   47427 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 15:31:20.519403   47427 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 15:31:20.762678   47427 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 15:31:20.763185   47427 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:31:20.893212   47427 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:31:20.971877   47427 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:31:21.130413   47427 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:31:21.264570   47427 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:31:21.285653   47427 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:31:21.286823   47427 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:31:21.287562   47427 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:31:21.415101   47427 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:31:21.417121   47427 out.go:204]   - Booting up control plane ...
	I0719 15:31:21.417261   47427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:31:21.430949   47427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:31:21.432257   47427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:31:21.433086   47427 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:31:21.437190   47427 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:32:01.429828   47427 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:32:01.430162   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:32:01.430882   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:32:06.431668   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:32:06.431931   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:32:16.431288   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:32:16.431600   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:32:36.430762   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:32:36.431061   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:33:16.433298   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:33:16.433590   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:33:16.433610   47427 kubeadm.go:310] 
	I0719 15:33:16.433653   47427 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:33:16.433716   47427 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:33:16.433743   47427 kubeadm.go:310] 
	I0719 15:33:16.433802   47427 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:33:16.434025   47427 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:33:16.434201   47427 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:33:16.434263   47427 kubeadm.go:310] 
	I0719 15:33:16.434423   47427 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:33:16.434487   47427 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:33:16.434543   47427 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:33:16.434554   47427 kubeadm.go:310] 
	I0719 15:33:16.434792   47427 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:33:16.434915   47427 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:33:16.434923   47427 kubeadm.go:310] 
	I0719 15:33:16.435020   47427 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:33:16.435132   47427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:33:16.435224   47427 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:33:16.435390   47427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:33:16.435412   47427 kubeadm.go:310] 
	I0719 15:33:16.435494   47427 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:33:16.435588   47427 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:33:16.435688   47427 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
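	The init fails because the kubelet never answers on its health port; the checks kubeadm suggests can be run directly on the node (for example via minikube ssh -p kubernetes-upgrade-574044):

	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	curl -sSL http://localhost:10248/healthz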
	W0719 15:33:16.435790   47427 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-574044 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-574044 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-574044 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-574044 localhost] and IPs [192.168.39.87 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 15:33:16.435840   47427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:33:18.581430   47427 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.145562163s)
	I0719 15:33:18.581499   47427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:33:18.595719   47427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:33:18.605778   47427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:33:18.605803   47427 kubeadm.go:157] found existing configuration files:
	
	I0719 15:33:18.605861   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:33:18.619377   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:33:18.619441   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:33:18.632018   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:33:18.641858   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:33:18.641905   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:33:18.654008   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:33:18.664067   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:33:18.664112   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:33:18.676290   47427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:33:18.687234   47427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:33:18.687290   47427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:33:18.698282   47427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:33:18.784842   47427 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:33:18.784920   47427 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:33:18.943656   47427 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:33:18.943826   47427 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:33:18.943989   47427 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:33:19.129421   47427 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:33:19.131536   47427 out.go:204]   - Generating certificates and keys ...
	I0719 15:33:19.131640   47427 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:33:19.131727   47427 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:33:19.131836   47427 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:33:19.131958   47427 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:33:19.132054   47427 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:33:19.132126   47427 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:33:19.132206   47427 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:33:19.132475   47427 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:33:19.133147   47427 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:33:19.133961   47427 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:33:19.134304   47427 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:33:19.134400   47427 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:33:19.328078   47427 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:33:19.490912   47427 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:33:19.733846   47427 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:33:19.987266   47427 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:33:20.011427   47427 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:33:20.011595   47427 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:33:20.011677   47427 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:33:20.188653   47427 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:33:20.190853   47427 out.go:204]   - Booting up control plane ...
	I0719 15:33:20.190974   47427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:33:20.209975   47427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:33:20.211885   47427 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:33:20.213166   47427 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:33:20.216843   47427 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:34:00.219940   47427 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:34:00.220341   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:34:00.220571   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:34:05.221515   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:34:05.221776   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:34:15.222149   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:34:15.222397   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:34:35.221636   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:34:35.221819   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:35:15.222029   47427 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:35:15.222299   47427 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:35:15.222313   47427 kubeadm.go:310] 
	I0719 15:35:15.222363   47427 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:35:15.222418   47427 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:35:15.222426   47427 kubeadm.go:310] 
	I0719 15:35:15.222465   47427 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:35:15.222511   47427 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:35:15.222650   47427 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:35:15.222661   47427 kubeadm.go:310] 
	I0719 15:35:15.222789   47427 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:35:15.222831   47427 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:35:15.222876   47427 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:35:15.222884   47427 kubeadm.go:310] 
	I0719 15:35:15.223013   47427 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:35:15.223125   47427 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:35:15.223150   47427 kubeadm.go:310] 
	I0719 15:35:15.223291   47427 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:35:15.223403   47427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:35:15.223498   47427 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:35:15.223586   47427 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:35:15.223599   47427 kubeadm.go:310] 
	I0719 15:35:15.224693   47427 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:35:15.224807   47427 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:35:15.224890   47427 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:35:15.225021   47427 kubeadm.go:394] duration metric: took 3m57.144673756s to StartCluster
	I0719 15:35:15.225085   47427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:35:15.225147   47427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:35:15.299464   47427 cri.go:89] found id: ""
	I0719 15:35:15.299485   47427 logs.go:276] 0 containers: []
	W0719 15:35:15.299491   47427 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:35:15.299497   47427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:35:15.299543   47427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:35:15.344139   47427 cri.go:89] found id: ""
	I0719 15:35:15.344164   47427 logs.go:276] 0 containers: []
	W0719 15:35:15.344173   47427 logs.go:278] No container was found matching "etcd"
	I0719 15:35:15.344181   47427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:35:15.344241   47427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:35:15.383880   47427 cri.go:89] found id: ""
	I0719 15:35:15.383904   47427 logs.go:276] 0 containers: []
	W0719 15:35:15.383911   47427 logs.go:278] No container was found matching "coredns"
	I0719 15:35:15.383916   47427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:35:15.383964   47427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:35:15.431614   47427 cri.go:89] found id: ""
	I0719 15:35:15.431641   47427 logs.go:276] 0 containers: []
	W0719 15:35:15.431652   47427 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:35:15.431660   47427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:35:15.431720   47427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:35:15.477831   47427 cri.go:89] found id: ""
	I0719 15:35:15.477859   47427 logs.go:276] 0 containers: []
	W0719 15:35:15.477870   47427 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:35:15.477878   47427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:35:15.477943   47427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:35:15.526411   47427 cri.go:89] found id: ""
	I0719 15:35:15.526438   47427 logs.go:276] 0 containers: []
	W0719 15:35:15.526449   47427 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:35:15.526457   47427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:35:15.526519   47427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:35:15.587649   47427 cri.go:89] found id: ""
	I0719 15:35:15.587684   47427 logs.go:276] 0 containers: []
	W0719 15:35:15.587695   47427 logs.go:278] No container was found matching "kindnet"
	I0719 15:35:15.587705   47427 logs.go:123] Gathering logs for kubelet ...
	I0719 15:35:15.587720   47427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:35:15.665758   47427 logs.go:123] Gathering logs for dmesg ...
	I0719 15:35:15.665868   47427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:35:15.688344   47427 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:35:15.688427   47427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:35:15.864656   47427 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:35:15.864719   47427 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:35:15.864740   47427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:35:16.015199   47427 logs.go:123] Gathering logs for container status ...
	I0719 15:35:16.015261   47427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:35:16.115086   47427 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:35:16.115143   47427 out.go:239] * 
	* 
	W0719 15:35:16.115212   47427 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:35:16.115244   47427 out.go:239] * 
	* 
	W0719 15:35:16.116371   47427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:35:16.120137   47427 out.go:177] 
	W0719 15:35:16.121437   47427 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:35:16.121512   47427 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:35:16.121538   47427 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:35:16.123764   47427 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
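The failure output above ends with a suggestion to pass the systemd cgroup driver to the kubelet. A minimal sketch of that retry, reusing the profile and flags from the failed command (illustrative only, not part of the captured log):

	# Retry the v1.20.0 start with the kubelet cgroup driver the failure output suggests.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 \
	  --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd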
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-574044
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-574044: (1.671392332s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-574044 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-574044 status --format={{.Host}}: exit status 7 (77.760255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.883066755s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-574044 version --output=json
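The JSON emitted by the version check above can be reduced to just the client and server versions when reviewing the upgrade; a minimal sketch, assuming jq is available on the host (not part of the captured log):

	# Extract the client and server gitVersion fields from the JSON output.
	kubectl --context kubernetes-upgrade-574044 version --output=json \
	  | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'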
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.081041ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-574044] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-574044
	    minikube start -p kubernetes-upgrade-574044 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5740442 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-574044 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-574044 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.383548365s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-19 15:37:29.379100659 +0000 UTC m=+4624.174687180
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-574044 -n kubernetes-upgrade-574044
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-574044 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-574044 logs -n 25: (1.635667346s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-526259 sudo crio            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-526259                      | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:34 UTC |
	| start   | -p force-systemd-env-802753           | force-systemd-env-802753  | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-464954                       | pause-464954              | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-490845 sudo           | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-490845                | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:34 UTC |
	| start   | -p NoKubernetes-490845                | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-574044          | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| delete  | -p force-systemd-env-802753           | force-systemd-env-802753  | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| start   | -p kubernetes-upgrade-574044          | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:36 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-632791          | force-systemd-flag-632791 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:36 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-490845 sudo           | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-490845                | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| start   | -p cert-expiration-939600             | cert-expiration-939600    | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:36 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-464954                       | pause-464954              | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| start   | -p cert-options-127438                | cert-options-127438       | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-574044          | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-574044          | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:36 UTC | 19 Jul 24 15:37 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-632791 ssh cat     | force-systemd-flag-632791 | jenkins | v1.33.1 | 19 Jul 24 15:36 UTC | 19 Jul 24 15:36 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-632791          | force-systemd-flag-632791 | jenkins | v1.33.1 | 19 Jul 24 15:36 UTC | 19 Jul 24 15:36 UTC |
	| start   | -p old-k8s-version-862924             | old-k8s-version-862924    | jenkins | v1.33.1 | 19 Jul 24 15:36 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | cert-options-127438 ssh               | cert-options-127438       | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-127438 -- sudo        | cert-options-127438       | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-127438                | cert-options-127438       | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p no-preload-382231 --memory=2200    | no-preload-382231         | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:37:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:37:26.776372   55555 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:37:26.776699   55555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:37:26.776725   55555 out.go:304] Setting ErrFile to fd 2...
	I0719 15:37:26.776737   55555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:37:26.776936   55555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:37:26.777508   55555 out.go:298] Setting JSON to false
	I0719 15:37:26.778556   55555 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4793,"bootTime":1721398654,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:37:26.778639   55555 start.go:139] virtualization: kvm guest
	I0719 15:37:26.781013   55555 out.go:177] * [no-preload-382231] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:37:26.782347   55555 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:37:26.782375   55555 notify.go:220] Checking for updates...
	I0719 15:37:26.784755   55555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:37:26.785992   55555 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:37:26.787141   55555 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:37:26.788238   55555 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:37:26.789288   55555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:37:26.790939   55555 config.go:182] Loaded profile config "cert-expiration-939600": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:37:26.791083   55555 config.go:182] Loaded profile config "kubernetes-upgrade-574044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:37:26.791223   55555 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:37:26.791324   55555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:37:26.841678   55555 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 15:37:26.843108   55555 start.go:297] selected driver: kvm2
	I0719 15:37:26.843130   55555 start.go:901] validating driver "kvm2" against <nil>
	I0719 15:37:26.843145   55555 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:37:26.844188   55555 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:37:26.844279   55555 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:37:26.863440   55555 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:37:26.863487   55555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 15:37:26.863773   55555 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:37:26.863805   55555 cni.go:84] Creating CNI manager for ""
	I0719 15:37:26.863813   55555 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:37:26.863827   55555 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:37:26.863876   55555 start.go:340] cluster config:
	{Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:37:26.863996   55555 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:37:26.865402   55555 out.go:177] * Starting "no-preload-382231" primary control-plane node in "no-preload-382231" cluster
	I0719 15:37:26.055420   54352 api_server.go:279] https://192.168.39.87:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:37:26.055451   54352 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:37:26.303718   54352 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0719 15:37:26.310523   54352 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:37:26.310555   54352 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:37:26.806326   54352 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0719 15:37:26.816499   54352 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:37:26.816527   54352 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:37:27.303949   54352 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0719 15:37:27.328502   54352 api_server.go:279] https://192.168.39.87:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:37:27.328542   54352 api_server.go:103] status: https://192.168.39.87:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:37:27.804092   54352 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0719 15:37:27.809116   54352 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0719 15:37:27.815903   54352 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:37:27.815930   54352 api_server.go:131] duration metric: took 5.012658705s to wait for apiserver health ...
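The [+]/[-] dumps above are the apiserver's verbose health report: minikube polls /healthz after restarting the control plane and keeps retrying while post-start hooks such as rbac/bootstrap-roles are still failing (the initial 403 is the unauthenticated probe being rejected while those bootstrap roles were still being created). The same report can be pulled by hand with `kubectl --context kubernetes-upgrade-574044 get --raw '/healthz?verbose'`, or with a minimal client-go sketch like the one below (illustrative only, not minikube's api_server.go; the kubeconfig path is the one from this run):

    // Illustrative sketch: fetch the verbose /healthz report via client-go.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same endpoint that produced the [+]ping ok / [-]poststarthook lines above.
        body, err := cs.Discovery().RESTClient().
            Get().AbsPath("/healthz").Param("verbose", "true").
            DoRaw(context.Background())
        if err != nil {
            fmt.Println("healthz not ready yet:", err)
        }
        fmt.Println(string(body))
    }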
	I0719 15:37:27.815940   54352 cni.go:84] Creating CNI manager for ""
	I0719 15:37:27.815948   54352 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:37:27.817848   54352 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:37:24.243556   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:24.244181   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:24.244211   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:24.244123   55211 retry.go:31] will retry after 1.942652305s: waiting for machine to come up
	I0719 15:37:26.560667   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:26.561409   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:26.561441   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:26.561361   55211 retry.go:31] will retry after 2.724602942s: waiting for machine to come up
	I0719 15:37:27.819003   54352 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:37:27.830050   54352 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:37:27.848576   54352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:37:27.860724   54352 system_pods.go:59] 8 kube-system pods found
	I0719 15:37:27.860761   54352 system_pods.go:61] "coredns-5cfdc65f69-sftk4" [b39bb3ec-2ec5-4117-80d0-9ab63dd55554] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:37:27.860771   54352 system_pods.go:61] "coredns-5cfdc65f69-zcrt5" [9771e635-26f5-4e02-92d8-74a0a1c86ce2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:37:27.860779   54352 system_pods.go:61] "etcd-kubernetes-upgrade-574044" [f8c83646-9da1-4bc3-a40c-13c0751ccd89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:37:27.860789   54352 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-574044" [11876a55-a785-45eb-9dd5-d05902661dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:37:27.860800   54352 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-574044" [b153041f-64c5-400c-9f0d-3130ca02a990] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:37:27.860814   54352 system_pods.go:61] "kube-proxy-f2p4s" [66c38e23-d3d8-4453-9081-43b6ba85c5f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:37:27.860826   54352 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-574044" [ac3e00cf-e11d-4990-880a-294e18882d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:37:27.860838   54352 system_pods.go:61] "storage-provisioner" [5c3552aa-ed67-407a-acb9-cfd884c23880] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:37:27.860850   54352 system_pods.go:74] duration metric: took 12.255411ms to wait for pod list to return data ...
	I0719 15:37:27.860863   54352 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:37:27.865434   54352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:37:27.865461   54352 node_conditions.go:123] node cpu capacity is 2
	I0719 15:37:27.865474   54352 node_conditions.go:105] duration metric: took 4.602797ms to run NodePressure ...
	I0719 15:37:27.865498   54352 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:37:28.182564   54352 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:37:28.196970   54352 ops.go:34] apiserver oom_adj: -16
	I0719 15:37:28.196994   54352 kubeadm.go:597] duration metric: took 8.357582997s to restartPrimaryControlPlane
	I0719 15:37:28.197004   54352 kubeadm.go:394] duration metric: took 8.52102861s to StartCluster
	I0719 15:37:28.197025   54352 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:28.197103   54352 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:37:28.198008   54352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:28.198314   54352 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.87 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:37:28.198364   54352 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:37:28.198440   54352 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-574044"
	I0719 15:37:28.198451   54352 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-574044"
	I0719 15:37:28.198469   54352 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-574044"
	W0719 15:37:28.198478   54352 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:37:28.198491   54352 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-574044"
	I0719 15:37:28.198516   54352 host.go:66] Checking if "kubernetes-upgrade-574044" exists ...
	I0719 15:37:28.198524   54352 config.go:182] Loaded profile config "kubernetes-upgrade-574044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:37:28.198890   54352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:37:28.198919   54352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:37:28.198919   54352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:37:28.198940   54352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:37:28.200000   54352 out.go:177] * Verifying Kubernetes components...
	I0719 15:37:28.201512   54352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:37:28.213905   54352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0719 15:37:28.214375   54352 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:37:28.214473   54352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45335
	I0719 15:37:28.214875   54352 main.go:141] libmachine: Using API Version  1
	I0719 15:37:28.214892   54352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:37:28.214946   54352 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:37:28.215236   54352 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:37:28.215434   54352 main.go:141] libmachine: Using API Version  1
	I0719 15:37:28.215448   54352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:37:28.215458   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetState
	I0719 15:37:28.215773   54352 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:37:28.216348   54352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:37:28.216380   54352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:37:28.219081   54352 kapi.go:59] client config for kubernetes-upgrade-574044: &rest.Config{Host:"https://192.168.39.87:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044/client.key", CAFile:"/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
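The rest.Config dump above shows how the test tooling authenticates to the upgraded cluster: the API server at https://192.168.39.87:8443, the per-profile client certificate and key, and the minikube CA. A hand-built equivalent in client-go might look like the following sketch (illustrative only; the certificate paths are copied from the dump, and listing kube-system pods mirrors the system_pods checks elsewhere in the log):

    // Illustrative only: an equivalent rest.Config assembled from the certificate
    // paths shown in the kapi.go dump above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/19302-3847/.minikube/profiles/kubernetes-upgrade-574044"
        cfg := &rest.Config{
            Host: "https://192.168.39.87:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/client.crt",
                KeyFile:  profile + "/client.key",
                CAFile:   "/home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }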
	I0719 15:37:28.219564   54352 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-574044"
	W0719 15:37:28.219586   54352 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:37:28.219614   54352 host.go:66] Checking if "kubernetes-upgrade-574044" exists ...
	I0719 15:37:28.219983   54352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:37:28.220017   54352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:37:28.230988   54352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0719 15:37:28.231433   54352 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:37:28.231871   54352 main.go:141] libmachine: Using API Version  1
	I0719 15:37:28.231894   54352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:37:28.232375   54352 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:37:28.232554   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetState
	I0719 15:37:28.234256   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:37:28.234884   54352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0719 15:37:28.235315   54352 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:37:28.235817   54352 main.go:141] libmachine: Using API Version  1
	I0719 15:37:28.235848   54352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:37:28.236185   54352 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:37:28.236346   54352 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:37:28.236920   54352 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:37:28.236950   54352 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:37:28.237739   54352 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:37:28.237757   54352 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:37:28.237780   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:37:28.240859   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:37:28.241344   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:35:36 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:37:28.241374   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:37:28.241491   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:37:28.241690   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:37:28.241860   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:37:28.242013   54352 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa Username:docker}
	I0719 15:37:28.251898   54352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45695
	I0719 15:37:28.252305   54352 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:37:28.252728   54352 main.go:141] libmachine: Using API Version  1
	I0719 15:37:28.252746   54352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:37:28.253143   54352 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:37:28.253312   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetState
	I0719 15:37:28.254738   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .DriverName
	I0719 15:37:28.254943   54352 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:37:28.254961   54352 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:37:28.254980   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHHostname
	I0719 15:37:28.257563   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:37:28.257976   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:cf:68", ip: ""} in network mk-kubernetes-upgrade-574044: {Iface:virbr1 ExpiryTime:2024-07-19 16:35:36 +0000 UTC Type:0 Mac:52:54:00:0a:cf:68 Iaid: IPaddr:192.168.39.87 Prefix:24 Hostname:kubernetes-upgrade-574044 Clientid:01:52:54:00:0a:cf:68}
	I0719 15:37:28.257998   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined IP address 192.168.39.87 and MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:37:28.258176   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHPort
	I0719 15:37:28.258390   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHKeyPath
	I0719 15:37:28.258524   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .GetSSHUsername
	I0719 15:37:28.258660   54352 sshutil.go:53] new ssh client: &{IP:192.168.39.87 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/kubernetes-upgrade-574044/id_rsa Username:docker}
	I0719 15:37:28.400399   54352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:37:28.424659   54352 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:37:28.424752   54352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:37:28.441477   54352 api_server.go:72] duration metric: took 243.126179ms to wait for apiserver process to appear ...
	I0719 15:37:28.441499   54352 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:37:28.441517   54352 api_server.go:253] Checking apiserver healthz at https://192.168.39.87:8443/healthz ...
	I0719 15:37:28.447867   54352 api_server.go:279] https://192.168.39.87:8443/healthz returned 200:
	ok
	I0719 15:37:28.448792   54352 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:37:28.448814   54352 api_server.go:131] duration metric: took 7.308438ms to wait for apiserver health ...
	I0719 15:37:28.448823   54352 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:37:28.455291   54352 system_pods.go:59] 8 kube-system pods found
	I0719 15:37:28.455329   54352 system_pods.go:61] "coredns-5cfdc65f69-sftk4" [b39bb3ec-2ec5-4117-80d0-9ab63dd55554] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:37:28.455358   54352 system_pods.go:61] "coredns-5cfdc65f69-zcrt5" [9771e635-26f5-4e02-92d8-74a0a1c86ce2] Running
	I0719 15:37:28.455370   54352 system_pods.go:61] "etcd-kubernetes-upgrade-574044" [f8c83646-9da1-4bc3-a40c-13c0751ccd89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:37:28.455378   54352 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-574044" [11876a55-a785-45eb-9dd5-d05902661dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:37:28.455388   54352 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-574044" [b153041f-64c5-400c-9f0d-3130ca02a990] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:37:28.455398   54352 system_pods.go:61] "kube-proxy-f2p4s" [66c38e23-d3d8-4453-9081-43b6ba85c5f7] Running
	I0719 15:37:28.455408   54352 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-574044" [ac3e00cf-e11d-4990-880a-294e18882d44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:37:28.455430   54352 system_pods.go:61] "storage-provisioner" [5c3552aa-ed67-407a-acb9-cfd884c23880] Running
	I0719 15:37:28.455437   54352 system_pods.go:74] duration metric: took 6.607735ms to wait for pod list to return data ...
	I0719 15:37:28.455449   54352 kubeadm.go:582] duration metric: took 257.101167ms to wait for: map[apiserver:true system_pods:true]
	I0719 15:37:28.455475   54352 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:37:28.462071   54352 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:37:28.462092   54352 node_conditions.go:123] node cpu capacity is 2
	I0719 15:37:28.462102   54352 node_conditions.go:105] duration metric: took 6.621053ms to run NodePressure ...
	I0719 15:37:28.462116   54352 start.go:241] waiting for startup goroutines ...
	I0719 15:37:28.496391   54352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:37:28.549596   54352 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:37:29.305272   54352 main.go:141] libmachine: Making call to close driver server
	I0719 15:37:29.305298   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .Close
	I0719 15:37:29.305313   54352 main.go:141] libmachine: Making call to close driver server
	I0719 15:37:29.305328   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .Close
	I0719 15:37:29.305606   54352 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:37:29.305621   54352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:37:29.305630   54352 main.go:141] libmachine: Making call to close driver server
	I0719 15:37:29.305638   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .Close
	I0719 15:37:29.305724   54352 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:37:29.305735   54352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:37:29.305744   54352 main.go:141] libmachine: Making call to close driver server
	I0719 15:37:29.305752   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .Close
	I0719 15:37:29.305990   54352 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:37:29.305995   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Closing plugin on server side
	I0719 15:37:29.306005   54352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:37:29.306005   54352 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:37:29.306017   54352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:37:29.311961   54352 main.go:141] libmachine: Making call to close driver server
	I0719 15:37:29.311981   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) Calling .Close
	I0719 15:37:29.312215   54352 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:37:29.312239   54352 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:37:29.312244   54352 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | Closing plugin on server side
	I0719 15:37:29.314318   54352 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 15:37:29.315902   54352 addons.go:510] duration metric: took 1.117549147s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 15:37:29.315942   54352 start.go:246] waiting for cluster config update ...
	I0719 15:37:29.315956   54352 start.go:255] writing updated cluster config ...
	I0719 15:37:29.316170   54352 ssh_runner.go:195] Run: rm -f paused
	I0719 15:37:29.364612   54352 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 15:37:29.366581   54352 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-574044" cluster and "default" namespace by default
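At this point the profile is running v1.31.0-beta.0 again and the kubeconfig points at it; the "minor skew: 1" note just means the local kubectl 1.30.3 is one minor version behind the cluster, which is within kubectl's supported skew. The test confirmed the upgrade earlier with `kubectl --context kubernetes-upgrade-574044 version --output=json`; a small client-go equivalent of that version check would be (sketch only, same kubeconfig path as above):

    // Sketch: confirm the control-plane version after the upgrade.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane:", v.GitVersion) // expected: v1.31.0-beta.0
    }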
	
	
	==> CRI-O <==
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.055144730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403450055119126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c511acd-bbbf-4f62-96c6-06626d175aef name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.055950184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcf8d7ad-9f75-4b88-9720-c75e9b506ed5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.056023990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcf8d7ad-9f75-4b88-9720-c75e9b506ed5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.058289356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20df38e4357672672af57b5f6f3e530050702b0318396441c6b9dfe273b686c6,PodSandboxId:6bc190ddc2b3ca2333bbaaa62b7e0d838ec35f3b7b3c12bba72bb62981fd2580,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447085191172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11de784c5dec3370e0c6c5a8284cbb1f41c5693b2788b8f6a18e90744e4dd2fb,PodSandboxId:7f9e306a38fc436a75b7ac32fe5937b4ea6404f7fc175d3e56c1cb5d2c8038b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447022725420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3b3c2d66a70deee5bdbc7f5dbc1f63b0046cae10c9c622eb61f650786f976c2,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721403447123911128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dda0a43c09d6e00ba8e1e46384da529d1298a107ec8de1eb1ff766bcbad8877,PodSandboxId:ee0a684ea35ddeb8637836c83d7fef59d9758a70da0bd4c7debe7e2697885be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721403447064110961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0c782d2664131d539b0cf9d93fa60900b50d5b4326fa5c09da4b1cb6cc037,PodSandboxId:d15457b03acea7476fd52c707c79cb7a4c4af7c4503cbf77af0f9a565e6abecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721403442256069845,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f051a44d3e903efad10621622f34c07838db177eb4d62404a896f6936f2e224,PodSandboxId:f6cc8c28a6ae0330e96305cde9ea6fe96e45e3c6945a416ea1a4a30b06d151e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721403442251772734,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097a3d8ea1833e9abd87953a535b98b9035d723609cfb2b513ed4530fb23a06b,PodSandboxId:8a08b3b77a625651a0e9a94ece182853c3fcc2d3f9f0502d28b60ec863253c3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721403442271630403,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67c22c128ab81d102d7f57e2dcc0d12f6fe03565a0ba9290fc90595a48fe656,PodSandboxId:26dcd4530eb40bb1a8ac2cf3f9591ed701a9113d695148b6caf0b89f75741686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721403442235398290,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e1740b79e6c71090105697792c55f34ae8ca53f46bcfc6222d8aa17eb45304,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721403439197394998,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f88abd50b7ca6934c48a8ad098c4cdf51fb80245a6ce31fb61d4c660ab96070,PodSandboxId:51f2bd168a563a6ed90e2985045902f53192d021b56dd6d0f195a72a1892d7df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403436403910038,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc4c30ce1eebe67a3a38a45d1025bc2052578b5977e86ce57e5d375acd200f1,PodSandboxId:c461bf41993f6769e262cd96d46a869a977e7d3de7005a3b1a0f3f56a2896429,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721403434607229278,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa92adb9509a99426ec575126e6f44e6d2a78cdea241a82a26b5aeff64f4c297,PodSandboxId:b3c63fff87d882cf23ba0c4812a20044a6dc5cbbded7568ba35be54d6e40aa86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721403434871980006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4ae87b1b6ad1712f7cbb6c3c7403b9984491ab557d71f0f9039396cacaa1a,PodSandboxId:9349459c27ccd5de08e116bf0c46d7ef69070ef7dbd7e9c32d5f7cffce3ee819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9
a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721403434776487584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb70c7cbad8cadad0e4e9d11d1a35230083101fffd2fd3094ca66ada34552c3,PodSandboxId:5ceef0ea5709808b506f5e55fa479b15e166d45443994c10966014bbd2726ca7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b19
8de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721403434891350825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70930ba7a22eb92addb9fe0f8cfd5bc3b4a683fb4b37ada81396c29af0cec35b,PodSandboxId:ae43caa59ed913c8529d4a377d8825fd6ff1e0a18a4e483a040d17dd250ab2e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721403434558972507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc6cc903147989448904aa27e343e7e784bd9a127c5a140b34eb7008e289e41,PodSandboxId:f7740a4056135b002bee87c110b3868cc54e02e31ee7fa689a6ecb9bb42da1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403365956009889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcf8d7ad-9f75-4b88-9720-c75e9b506ed5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.121753189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6adf0d5-0c64-4e96-8fe2-f744c31f4ed6 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.121860002Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6adf0d5-0c64-4e96-8fe2-f744c31f4ed6 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.132525971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bff04f7-d01e-4646-bed0-6dd91c13a8dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.132887293Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403450132865205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bff04f7-d01e-4646-bed0-6dd91c13a8dd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.133697792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a5d1887-6ea6-4f57-a262-180adda4c16c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.133757026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a5d1887-6ea6-4f57-a262-180adda4c16c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.134132643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20df38e4357672672af57b5f6f3e530050702b0318396441c6b9dfe273b686c6,PodSandboxId:6bc190ddc2b3ca2333bbaaa62b7e0d838ec35f3b7b3c12bba72bb62981fd2580,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447085191172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11de784c5dec3370e0c6c5a8284cbb1f41c5693b2788b8f6a18e90744e4dd2fb,PodSandboxId:7f9e306a38fc436a75b7ac32fe5937b4ea6404f7fc175d3e56c1cb5d2c8038b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447022725420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3b3c2d66a70deee5bdbc7f5dbc1f63b0046cae10c9c622eb61f650786f976c2,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721403447123911128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dda0a43c09d6e00ba8e1e46384da529d1298a107ec8de1eb1ff766bcbad8877,PodSandboxId:ee0a684ea35ddeb8637836c83d7fef59d9758a70da0bd4c7debe7e2697885be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721403447064110961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0c782d2664131d539b0cf9d93fa60900b50d5b4326fa5c09da4b1cb6cc037,PodSandboxId:d15457b03acea7476fd52c707c79cb7a4c4af7c4503cbf77af0f9a565e6abecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721403442256069845,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f051a44d3e903efad10621622f34c07838db177eb4d62404a896f6936f2e224,PodSandboxId:f6cc8c28a6ae0330e96305cde9ea6fe96e45e3c6945a416ea1a4a30b06d151e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721403442251772734,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097a3d8ea1833e9abd87953a535b98b9035d723609cfb2b513ed4530fb23a06b,PodSandboxId:8a08b3b77a625651a0e9a94ece182853c3fcc2d3f9f0502d28b60ec863253c3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721403442271630403,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67c22c128ab81d102d7f57e2dcc0d12f6fe03565a0ba9290fc90595a48fe656,PodSandboxId:26dcd4530eb40bb1a8ac2cf3f9591ed701a9113d695148b6caf0b89f75741686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721403442235398290,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e1740b79e6c71090105697792c55f34ae8ca53f46bcfc6222d8aa17eb45304,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721403439197394998,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f88abd50b7ca6934c48a8ad098c4cdf51fb80245a6ce31fb61d4c660ab96070,PodSandboxId:51f2bd168a563a6ed90e2985045902f53192d021b56dd6d0f195a72a1892d7df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403436403910038,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc4c30ce1eebe67a3a38a45d1025bc2052578b5977e86ce57e5d375acd200f1,PodSandboxId:c461bf41993f6769e262cd96d46a869a977e7d3de7005a3b1a0f3f56a2896429,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721403434607229278,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa92adb9509a99426ec575126e6f44e6d2a78cdea241a82a26b5aeff64f4c297,PodSandboxId:b3c63fff87d882cf23ba0c4812a20044a6dc5cbbded7568ba35be54d6e40aa86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721403434871980006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4ae87b1b6ad1712f7cbb6c3c7403b9984491ab557d71f0f9039396cacaa1a,PodSandboxId:9349459c27ccd5de08e116bf0c46d7ef69070ef7dbd7e9c32d5f7cffce3ee819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9
a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721403434776487584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb70c7cbad8cadad0e4e9d11d1a35230083101fffd2fd3094ca66ada34552c3,PodSandboxId:5ceef0ea5709808b506f5e55fa479b15e166d45443994c10966014bbd2726ca7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b19
8de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721403434891350825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70930ba7a22eb92addb9fe0f8cfd5bc3b4a683fb4b37ada81396c29af0cec35b,PodSandboxId:ae43caa59ed913c8529d4a377d8825fd6ff1e0a18a4e483a040d17dd250ab2e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721403434558972507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc6cc903147989448904aa27e343e7e784bd9a127c5a140b34eb7008e289e41,PodSandboxId:f7740a4056135b002bee87c110b3868cc54e02e31ee7fa689a6ecb9bb42da1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403365956009889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a5d1887-6ea6-4f57-a262-180adda4c16c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.175788682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a80dc16b-3849-496f-a43c-856d059d44fe name=/runtime.v1.RuntimeService/Version
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.175878640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a80dc16b-3849-496f-a43c-856d059d44fe name=/runtime.v1.RuntimeService/Version
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.177200965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecd31c5b-9944-45d1-960e-c5698b0304b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.177755122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403450177730972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecd31c5b-9944-45d1-960e-c5698b0304b2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.178244214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec5631c7-aedc-41dd-a3c6-a3ac96708b99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.178355318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec5631c7-aedc-41dd-a3c6-a3ac96708b99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.178661162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20df38e4357672672af57b5f6f3e530050702b0318396441c6b9dfe273b686c6,PodSandboxId:6bc190ddc2b3ca2333bbaaa62b7e0d838ec35f3b7b3c12bba72bb62981fd2580,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447085191172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11de784c5dec3370e0c6c5a8284cbb1f41c5693b2788b8f6a18e90744e4dd2fb,PodSandboxId:7f9e306a38fc436a75b7ac32fe5937b4ea6404f7fc175d3e56c1cb5d2c8038b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447022725420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3b3c2d66a70deee5bdbc7f5dbc1f63b0046cae10c9c622eb61f650786f976c2,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721403447123911128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dda0a43c09d6e00ba8e1e46384da529d1298a107ec8de1eb1ff766bcbad8877,PodSandboxId:ee0a684ea35ddeb8637836c83d7fef59d9758a70da0bd4c7debe7e2697885be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721403447064110961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0c782d2664131d539b0cf9d93fa60900b50d5b4326fa5c09da4b1cb6cc037,PodSandboxId:d15457b03acea7476fd52c707c79cb7a4c4af7c4503cbf77af0f9a565e6abecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721403442256069845,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f051a44d3e903efad10621622f34c07838db177eb4d62404a896f6936f2e224,PodSandboxId:f6cc8c28a6ae0330e96305cde9ea6fe96e45e3c6945a416ea1a4a30b06d151e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721403442251772734,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097a3d8ea1833e9abd87953a535b98b9035d723609cfb2b513ed4530fb23a06b,PodSandboxId:8a08b3b77a625651a0e9a94ece182853c3fcc2d3f9f0502d28b60ec863253c3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721403442271630403,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67c22c128ab81d102d7f57e2dcc0d12f6fe03565a0ba9290fc90595a48fe656,PodSandboxId:26dcd4530eb40bb1a8ac2cf3f9591ed701a9113d695148b6caf0b89f75741686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721403442235398290,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e1740b79e6c71090105697792c55f34ae8ca53f46bcfc6222d8aa17eb45304,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721403439197394998,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f88abd50b7ca6934c48a8ad098c4cdf51fb80245a6ce31fb61d4c660ab96070,PodSandboxId:51f2bd168a563a6ed90e2985045902f53192d021b56dd6d0f195a72a1892d7df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403436403910038,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc4c30ce1eebe67a3a38a45d1025bc2052578b5977e86ce57e5d375acd200f1,PodSandboxId:c461bf41993f6769e262cd96d46a869a977e7d3de7005a3b1a0f3f56a2896429,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721403434607229278,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa92adb9509a99426ec575126e6f44e6d2a78cdea241a82a26b5aeff64f4c297,PodSandboxId:b3c63fff87d882cf23ba0c4812a20044a6dc5cbbded7568ba35be54d6e40aa86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721403434871980006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4ae87b1b6ad1712f7cbb6c3c7403b9984491ab557d71f0f9039396cacaa1a,PodSandboxId:9349459c27ccd5de08e116bf0c46d7ef69070ef7dbd7e9c32d5f7cffce3ee819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9
a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721403434776487584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb70c7cbad8cadad0e4e9d11d1a35230083101fffd2fd3094ca66ada34552c3,PodSandboxId:5ceef0ea5709808b506f5e55fa479b15e166d45443994c10966014bbd2726ca7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b19
8de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721403434891350825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70930ba7a22eb92addb9fe0f8cfd5bc3b4a683fb4b37ada81396c29af0cec35b,PodSandboxId:ae43caa59ed913c8529d4a377d8825fd6ff1e0a18a4e483a040d17dd250ab2e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721403434558972507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc6cc903147989448904aa27e343e7e784bd9a127c5a140b34eb7008e289e41,PodSandboxId:f7740a4056135b002bee87c110b3868cc54e02e31ee7fa689a6ecb9bb42da1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403365956009889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec5631c7-aedc-41dd-a3c6-a3ac96708b99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.212233831Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab2f701d-59b9-4aa4-a06d-5ab8f6d76762 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.212406675Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab2f701d-59b9-4aa4-a06d-5ab8f6d76762 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.213603526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02ca955e-254b-4837-8642-c53fb93599be name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.213994367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403450213974749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02ca955e-254b-4837-8642-c53fb93599be name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.214420646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0378c13-470a-4f3a-81d0-12d4fe3bfe17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.214474626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0378c13-470a-4f3a-81d0-12d4fe3bfe17 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:37:30 kubernetes-upgrade-574044 crio[3114]: time="2024-07-19 15:37:30.214778841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20df38e4357672672af57b5f6f3e530050702b0318396441c6b9dfe273b686c6,PodSandboxId:6bc190ddc2b3ca2333bbaaa62b7e0d838ec35f3b7b3c12bba72bb62981fd2580,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447085191172,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11de784c5dec3370e0c6c5a8284cbb1f41c5693b2788b8f6a18e90744e4dd2fb,PodSandboxId:7f9e306a38fc436a75b7ac32fe5937b4ea6404f7fc175d3e56c1cb5d2c8038b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403447022725420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3b3c2d66a70deee5bdbc7f5dbc1f63b0046cae10c9c622eb61f650786f976c2,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721403447123911128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dda0a43c09d6e00ba8e1e46384da529d1298a107ec8de1eb1ff766bcbad8877,PodSandboxId:ee0a684ea35ddeb8637836c83d7fef59d9758a70da0bd4c7debe7e2697885be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721403447064110961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ac0c782d2664131d539b0cf9d93fa60900b50d5b4326fa5c09da4b1cb6cc037,PodSandboxId:d15457b03acea7476fd52c707c79cb7a4c4af7c4503cbf77af0f9a565e6abecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721403442256069845,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f051a44d3e903efad10621622f34c07838db177eb4d62404a896f6936f2e224,PodSandboxId:f6cc8c28a6ae0330e96305cde9ea6fe96e45e3c6945a416ea1a4a30b06d151e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721403442251772734,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:097a3d8ea1833e9abd87953a535b98b9035d723609cfb2b513ed4530fb23a06b,PodSandboxId:8a08b3b77a625651a0e9a94ece182853c3fcc2d3f9f0502d28b60ec863253c3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721403442271630403,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67c22c128ab81d102d7f57e2dcc0d12f6fe03565a0ba9290fc90595a48fe656,PodSandboxId:26dcd4530eb40bb1a8ac2cf3f9591ed701a9113d695148b6caf0b89f75741686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721403442235398290,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e1740b79e6c71090105697792c55f34ae8ca53f46bcfc6222d8aa17eb45304,PodSandboxId:7f6c1bb9bce3d0420aa2c5a8ff91adf7ab8c6c4b3c6bfdcb18a01aaa76cb0d37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721403439197394998,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3552aa-ed67-407a-acb9-cfd884c23880,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f88abd50b7ca6934c48a8ad098c4cdf51fb80245a6ce31fb61d4c660ab96070,PodSandboxId:51f2bd168a563a6ed90e2985045902f53192d021b56dd6d0f195a72a1892d7df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403436403910038,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zcrt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9771e635-26f5-4e02-92d8-74a0a1c86ce2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc4c30ce1eebe67a3a38a45d1025bc2052578b5977e86ce57e5d375acd200f1,PodSandboxId:c461bf41993f6769e262cd96d46a869a977e7d3de7005a3b1a0f3f56a2896429,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721403434607229278,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f2p4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66c38e23-d3d8-4453-9081-43b6ba85c5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa92adb9509a99426ec575126e6f44e6d2a78cdea241a82a26b5aeff64f4c297,PodSandboxId:b3c63fff87d882cf23ba0c4812a20044a6dc5cbbded7568ba35be54d6e40aa86,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721403434871980006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16414ab327eb63fd8cd964c69c918e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4ae87b1b6ad1712f7cbb6c3c7403b9984491ab557d71f0f9039396cacaa1a,PodSandboxId:9349459c27ccd5de08e116bf0c46d7ef69070ef7dbd7e9c32d5f7cffce3ee819,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9
a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721403434776487584,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3946a4d95b9c399b59ace2b7960c3df2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb70c7cbad8cadad0e4e9d11d1a35230083101fffd2fd3094ca66ada34552c3,PodSandboxId:5ceef0ea5709808b506f5e55fa479b15e166d45443994c10966014bbd2726ca7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b19
8de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721403434891350825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22d240e5a10d11ba40bf10171aeaa34d,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70930ba7a22eb92addb9fe0f8cfd5bc3b4a683fb4b37ada81396c29af0cec35b,PodSandboxId:ae43caa59ed913c8529d4a377d8825fd6ff1e0a18a4e483a040d17dd250ab2e8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721403434558972507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-574044,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cc72a465c59ecce6f45893c28fbd080,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc6cc903147989448904aa27e343e7e784bd9a127c5a140b34eb7008e289e41,PodSandboxId:f7740a4056135b002bee87c110b3868cc54e02e31ee7fa689a6ecb9bb42da1b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403365956009889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-sftk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39bb3ec-2ec5-4117-80d0-9ab63dd55554,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0378c13-470a-4f3a-81d0-12d4fe3bfe17 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a3b3c2d66a70d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       3                   7f6c1bb9bce3d       storage-provisioner
	20df38e435767       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   2                   6bc190ddc2b3c       coredns-5cfdc65f69-zcrt5
	5dda0a43c09d6       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago        Running             kube-proxy                2                   ee0a684ea35dd       kube-proxy-f2p4s
	11de784c5dec3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   7f9e306a38fc4       coredns-5cfdc65f69-sftk4
	097a3d8ea1833       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   8 seconds ago        Running             kube-apiserver            2                   8a08b3b77a625       kube-apiserver-kubernetes-upgrade-574044
	3ac0c782d2664       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   8 seconds ago        Running             kube-scheduler            2                   d15457b03acea       kube-scheduler-kubernetes-upgrade-574044
	0f051a44d3e90       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   8 seconds ago        Running             etcd                      2                   f6cc8c28a6ae0       etcd-kubernetes-upgrade-574044
	a67c22c128ab8       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   8 seconds ago        Running             kube-controller-manager   2                   26dcd4530eb40       kube-controller-manager-kubernetes-upgrade-574044
	87e1740b79e6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago       Exited              storage-provisioner       2                   7f6c1bb9bce3d       storage-provisioner
	3f88abd50b7ca       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago       Exited              coredns                   1                   51f2bd168a563       coredns-5cfdc65f69-zcrt5
	9bb70c7cbad8c       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   15 seconds ago       Exited              etcd                      1                   5ceef0ea57098       etcd-kubernetes-upgrade-574044
	fa92adb9509a9       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   15 seconds ago       Exited              kube-scheduler            1                   b3c63fff87d88       kube-scheduler-kubernetes-upgrade-574044
	dda4ae87b1b6a       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   15 seconds ago       Exited              kube-apiserver            1                   9349459c27ccd       kube-apiserver-kubernetes-upgrade-574044
	8cc4c30ce1eeb       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   15 seconds ago       Exited              kube-proxy                1                   c461bf41993f6       kube-proxy-f2p4s
	70930ba7a22eb       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   15 seconds ago       Exited              kube-controller-manager   1                   ae43caa59ed91       kube-controller-manager-kubernetes-upgrade-574044
	5fc6cc9031479       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   f7740a4056135       coredns-5cfdc65f69-sftk4
	
	
	==> coredns [11de784c5dec3370e0c6c5a8284cbb1f41c5693b2788b8f6a18e90744e4dd2fb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [20df38e4357672672af57b5f6f3e530050702b0318396441c6b9dfe273b686c6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [3f88abd50b7ca6934c48a8ad098c4cdf51fb80245a6ce31fb61d4c660ab96070] <==
	
	
	==> coredns [5fc6cc903147989448904aa27e343e7e784bd9a127c5a140b34eb7008e289e41] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[233077609]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 15:36:06.259) (total time: 30001ms):
	Trace[233077609]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (15:36:36.261)
	Trace[233077609]: [30.00158163s] [30.00158163s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1143543115]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 15:36:06.260) (total time: 30001ms):
	Trace[1143543115]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (15:36:36.261)
	Trace[1143543115]: [30.001272679s] [30.001272679s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2131735206]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 15:36:06.260) (total time: 30001ms):
	Trace[2131735206]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (15:36:36.261)
	Trace[2131735206]: [30.001157456s] [30.001157456s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-574044
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-574044
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:35:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-574044
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:37:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:37:26 +0000   Fri, 19 Jul 2024 15:35:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:37:26 +0000   Fri, 19 Jul 2024 15:35:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:37:26 +0000   Fri, 19 Jul 2024 15:35:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:37:26 +0000   Fri, 19 Jul 2024 15:35:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.87
	  Hostname:    kubernetes-upgrade-574044
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 881a7c4ab37b4b69b395dfeba84118ba
	  System UUID:                881a7c4a-b37b-4b69-b395-dfeba84118ba
	  Boot ID:                    5043c308-ca0f-46f8-8d6e-fa1a1458ec83
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-sftk4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 coredns-5cfdc65f69-zcrt5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-kubernetes-upgrade-574044                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         85s
	  kube-system                 kube-apiserver-kubernetes-upgrade-574044             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-574044    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-f2p4s                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-kubernetes-upgrade-574044             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)  kubelet          Node kubernetes-upgrade-574044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)  kubelet          Node kubernetes-upgrade-574044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 97s)  kubelet          Node kubernetes-upgrade-574044 status is now: NodeHasSufficientPID
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           86s                node-controller  Node kubernetes-upgrade-574044 event: Registered Node kubernetes-upgrade-574044 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-574044 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-574044 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-574044 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-574044 event: Registered Node kubernetes-upgrade-574044 in Controller
	
	
	==> dmesg <==
	[  +2.475445] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.121452] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.057928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061676] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.166081] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.141967] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.278790] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +4.162390] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +2.125848] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.056094] kauditd_printk_skb: 158 callbacks suppressed
	[Jul19 15:36] systemd-fstab-generator[1246]: Ignoring "noauto" option for root device
	[  +0.093736] kauditd_printk_skb: 69 callbacks suppressed
	[ +32.218262] kauditd_printk_skb: 107 callbacks suppressed
	[Jul19 15:37] systemd-fstab-generator[2506]: Ignoring "noauto" option for root device
	[  +0.328116] systemd-fstab-generator[2634]: Ignoring "noauto" option for root device
	[  +0.441991] systemd-fstab-generator[2756]: Ignoring "noauto" option for root device
	[  +0.386367] systemd-fstab-generator[2859]: Ignoring "noauto" option for root device
	[  +0.521734] systemd-fstab-generator[2934]: Ignoring "noauto" option for root device
	[  +1.664039] systemd-fstab-generator[3419]: Ignoring "noauto" option for root device
	[  +1.152259] kauditd_printk_skb: 270 callbacks suppressed
	[  +2.294220] systemd-fstab-generator[4060]: Ignoring "noauto" option for root device
	[  +5.717461] kauditd_printk_skb: 68 callbacks suppressed
	[  +1.138782] systemd-fstab-generator[4615]: Ignoring "noauto" option for root device
	
	
	==> etcd [0f051a44d3e903efad10621622f34c07838db177eb4d62404a896f6936f2e224] <==
	{"level":"info","ts":"2024-07-19T15:37:22.75349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a switched to configuration voters=(12310432666106675562)"}
	{"level":"info","ts":"2024-07-19T15:37:22.758546Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","added-peer-id":"aad771494ea7416a","added-peer-peer-urls":["https://192.168.39.87:2380"]}
	{"level":"info","ts":"2024-07-19T15:37:22.758646Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:37:22.75869Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:37:22.762914Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:37:22.76558Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aad771494ea7416a","initial-advertise-peer-urls":["https://192.168.39.87:2380"],"listen-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.87:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:37:22.767347Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:37:22.763426Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-07-19T15:37:22.767902Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-07-19T15:37:23.676433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T15:37:23.676564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:37:23.676703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgPreVoteResp from aad771494ea7416a at term 2"}
	{"level":"info","ts":"2024-07-19T15:37:23.676755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T15:37:23.676785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a received MsgVoteResp from aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-07-19T15:37:23.676823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became leader at term 3"}
	{"level":"info","ts":"2024-07-19T15:37:23.676863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aad771494ea7416a elected leader aad771494ea7416a at term 3"}
	{"level":"info","ts":"2024-07-19T15:37:23.688696Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aad771494ea7416a","local-member-attributes":"{Name:kubernetes-upgrade-574044 ClientURLs:[https://192.168.39.87:2379]}","request-path":"/0/members/aad771494ea7416a/attributes","cluster-id":"8794d44e1d88e05d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:37:23.688968Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:37:23.690447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T15:37:23.693514Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.87:2379"}
	{"level":"info","ts":"2024-07-19T15:37:23.6936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:37:23.695372Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:37:23.697585Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:37:23.697068Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T15:37:23.701206Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [9bb70c7cbad8cadad0e4e9d11d1a35230083101fffd2fd3094ca66ada34552c3] <==
	{"level":"info","ts":"2024-07-19T15:37:15.730935Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-19T15:37:15.783622Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","commit-index":452}
	{"level":"info","ts":"2024-07-19T15:37:15.783747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-19T15:37:15.783787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a became follower at term 2"}
	{"level":"info","ts":"2024-07-19T15:37:15.783797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aad771494ea7416a [peers: [], term: 2, commit: 452, applied: 0, lastindex: 452, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-19T15:37:15.792386Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-19T15:37:15.807182Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":432}
	{"level":"info","ts":"2024-07-19T15:37:15.814071Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-19T15:37:15.817031Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aad771494ea7416a","timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:37:15.817442Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aad771494ea7416a"}
	{"level":"info","ts":"2024-07-19T15:37:15.817481Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"aad771494ea7416a","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-19T15:37:15.81798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T15:37:15.819964Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:37:15.841788Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-19T15:37:15.843206Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aad771494ea7416a","initial-advertise-peer-urls":["https://192.168.39.87:2380"],"listen-peer-urls":["https://192.168.39.87:2380"],"advertise-client-urls":["https://192.168.39.87:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.87:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:37:15.843242Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:37:15.843361Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:37:15.843396Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:37:15.843404Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:37:15.84401Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-07-19T15:37:15.844021Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.87:2380"}
	{"level":"info","ts":"2024-07-19T15:37:15.849771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aad771494ea7416a switched to configuration voters=(12310432666106675562)"}
	{"level":"info","ts":"2024-07-19T15:37:15.849857Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","added-peer-id":"aad771494ea7416a","added-peer-peer-urls":["https://192.168.39.87:2380"]}
	{"level":"info","ts":"2024-07-19T15:37:15.849956Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8794d44e1d88e05d","local-member-id":"aad771494ea7416a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:37:15.850016Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 15:37:30 up 2 min,  0 users,  load average: 2.05, 0.54, 0.18
	Linux kubernetes-upgrade-574044 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [097a3d8ea1833e9abd87953a535b98b9035d723609cfb2b513ed4530fb23a06b] <==
	I0719 15:37:26.123545       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 15:37:26.131092       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0719 15:37:26.131146       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0719 15:37:26.134146       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 15:37:26.134257       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 15:37:26.134371       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 15:37:26.134155       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 15:37:26.135446       1 aggregator.go:171] initial CRD sync complete...
	I0719 15:37:26.135511       1 autoregister_controller.go:144] Starting autoregister controller
	I0719 15:37:26.135608       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 15:37:26.135642       1 cache.go:39] Caches are synced for autoregister controller
	I0719 15:37:26.136518       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0719 15:37:26.147195       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 15:37:26.156575       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 15:37:26.158005       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 15:37:26.158077       1 policy_source.go:224] refreshing policies
	I0719 15:37:26.158579       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 15:37:26.931072       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 15:37:27.963427       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 15:37:27.980288       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 15:37:28.073177       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 15:37:28.147664       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 15:37:28.158581       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 15:37:30.324649       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 15:37:30.473127       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [dda4ae87b1b6ad1712f7cbb6c3c7403b9984491ab557d71f0f9039396cacaa1a] <==
	I0719 15:37:15.964291       1 options.go:228] external host was not specified, using 192.168.39.87
	I0719 15:37:15.995817       1 server.go:142] Version: v1.31.0-beta.0
	I0719 15:37:15.995885       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [70930ba7a22eb92addb9fe0f8cfd5bc3b4a683fb4b37ada81396c29af0cec35b] <==
	I0719 15:37:16.303180       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-controller-manager [a67c22c128ab81d102d7f57e2dcc0d12f6fe03565a0ba9290fc90595a48fe656] <==
	I0719 15:37:30.351805       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 15:37:30.397986       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0719 15:37:30.414377       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 15:37:30.414545       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-574044"
	I0719 15:37:30.418279       1 shared_informer.go:320] Caches are synced for taint
	I0719 15:37:30.418680       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0719 15:37:30.419637       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-574044"
	I0719 15:37:30.419717       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 15:37:30.419847       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0719 15:37:30.420281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="77.23494ms"
	I0719 15:37:30.433020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="64.951µs"
	I0719 15:37:30.445406       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0719 15:37:30.475180       1 shared_informer.go:320] Caches are synced for HPA
	I0719 15:37:30.488917       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0719 15:37:30.490839       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0719 15:37:30.493128       1 shared_informer.go:320] Caches are synced for job
	I0719 15:37:30.500494       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0719 15:37:30.500583       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0719 15:37:30.508432       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 15:37:30.536800       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 15:37:30.543387       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0719 15:37:30.544596       1 shared_informer.go:320] Caches are synced for cronjob
	I0719 15:37:30.549931       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 15:37:30.563566       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 15:37:30.563604       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5dda0a43c09d6e00ba8e1e46384da529d1298a107ec8de1eb1ff766bcbad8877] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 15:37:27.476879       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 15:37:27.495677       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.87"]
	E0719 15:37:27.495790       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 15:37:27.569376       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 15:37:27.569432       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:37:27.569469       1 server_linux.go:170] "Using iptables Proxier"
	I0719 15:37:27.576786       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 15:37:27.577082       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 15:37:27.577129       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:37:27.578574       1 config.go:197] "Starting service config controller"
	I0719 15:37:27.578680       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:37:27.578734       1 config.go:104] "Starting endpoint slice config controller"
	I0719 15:37:27.578755       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:37:27.579434       1 config.go:326] "Starting node config controller"
	I0719 15:37:27.585110       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:37:27.679463       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:37:27.679427       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:37:27.685829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8cc4c30ce1eebe67a3a38a45d1025bc2052578b5977e86ce57e5d375acd200f1] <==
	
	
	==> kube-scheduler [3ac0c782d2664131d539b0cf9d93fa60900b50d5b4326fa5c09da4b1cb6cc037] <==
	I0719 15:37:24.103493       1 serving.go:386] Generated self-signed cert in-memory
	W0719 15:37:26.039956       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 15:37:26.040068       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:37:26.040130       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 15:37:26.040162       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 15:37:26.107091       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0719 15:37:26.107143       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:37:26.111988       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:37:26.112105       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:37:26.112259       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:37:26.112458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0719 15:37:26.213151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fa92adb9509a99426ec575126e6f44e6d2a78cdea241a82a26b5aeff64f4c297] <==
	
	
	==> kubelet <==
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:22.220525    4067 scope.go:117] "RemoveContainer" containerID="70930ba7a22eb92addb9fe0f8cfd5bc3b4a683fb4b37ada81396c29af0cec35b"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:22.222870    4067 scope.go:117] "RemoveContainer" containerID="9bb70c7cbad8cadad0e4e9d11d1a35230083101fffd2fd3094ca66ada34552c3"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:22.223403    4067 scope.go:117] "RemoveContainer" containerID="fa92adb9509a99426ec575126e6f44e6d2a78cdea241a82a26b5aeff64f4c297"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:22.225292    4067 scope.go:117] "RemoveContainer" containerID="dda4ae87b1b6ad1712f7cbb6c3c7403b9984491ab557d71f0f9039396cacaa1a"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: E0719 15:37:22.353392    4067 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-574044?timeout=10s\": dial tcp 192.168.39.87:8443: connect: connection refused" interval="800ms"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:22.447950    4067 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-574044"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: E0719 15:37:22.448795    4067 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.87:8443: connect: connection refused" node="kubernetes-upgrade-574044"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: W0719 15:37:22.508436    4067 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-574044&limit=500&resourceVersion=0": dial tcp 192.168.39.87:8443: connect: connection refused
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: E0719 15:37:22.508562    4067 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-574044&limit=500&resourceVersion=0\": dial tcp 192.168.39.87:8443: connect: connection refused" logger="UnhandledError"
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: W0719 15:37:22.678163    4067 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.87:8443: connect: connection refused
	Jul 19 15:37:22 kubernetes-upgrade-574044 kubelet[4067]: E0719 15:37:22.678243    4067 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.87:8443: connect: connection refused" logger="UnhandledError"
	Jul 19 15:37:23 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:23.250788    4067 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-574044"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.231885    4067 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-574044"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.232030    4067 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-574044"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.232066    4067 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.233132    4067 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.699548    4067 apiserver.go:52] "Watching apiserver"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.729397    4067 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.762737    4067 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c3552aa-ed67-407a-acb9-cfd884c23880-tmp\") pod \"storage-provisioner\" (UID: \"5c3552aa-ed67-407a-acb9-cfd884c23880\") " pod="kube-system/storage-provisioner"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.783032    4067 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66c38e23-d3d8-4453-9081-43b6ba85c5f7-lib-modules\") pod \"kube-proxy-f2p4s\" (UID: \"66c38e23-d3d8-4453-9081-43b6ba85c5f7\") " pod="kube-system/kube-proxy-f2p4s"
	Jul 19 15:37:26 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:26.787554    4067 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66c38e23-d3d8-4453-9081-43b6ba85c5f7-xtables-lock\") pod \"kube-proxy-f2p4s\" (UID: \"66c38e23-d3d8-4453-9081-43b6ba85c5f7\") " pod="kube-system/kube-proxy-f2p4s"
	Jul 19 15:37:27 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:27.007250    4067 scope.go:117] "RemoveContainer" containerID="5fc6cc903147989448904aa27e343e7e784bd9a127c5a140b34eb7008e289e41"
	Jul 19 15:37:27 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:27.008740    4067 scope.go:117] "RemoveContainer" containerID="8cc4c30ce1eebe67a3a38a45d1025bc2052578b5977e86ce57e5d375acd200f1"
	Jul 19 15:37:27 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:27.009863    4067 scope.go:117] "RemoveContainer" containerID="3f88abd50b7ca6934c48a8ad098c4cdf51fb80245a6ce31fb61d4c660ab96070"
	Jul 19 15:37:27 kubernetes-upgrade-574044 kubelet[4067]: I0719 15:37:27.010379    4067 scope.go:117] "RemoveContainer" containerID="87e1740b79e6c71090105697792c55f34ae8ca53f46bcfc6222d8aa17eb45304"
	
	
	==> storage-provisioner [87e1740b79e6c71090105697792c55f34ae8ca53f46bcfc6222d8aa17eb45304] <==
	I0719 15:37:19.375969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 15:37:19.381881       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a3b3c2d66a70deee5bdbc7f5dbc1f63b0046cae10c9c622eb61f650786f976c2] <==
	I0719 15:37:27.301257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 15:37:27.351173       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 15:37:27.351413       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-574044 -n kubernetes-upgrade-574044
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-574044 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-574044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-574044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-574044: (1.089065275s)
--- FAIL: TestKubernetesUpgrade (405.49s)
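In the post-mortem above, the kube-scheduler logs a forbidden error reading the extension-apiserver-authentication ConfigMap but continues without that configuration, while the kubelet and storage-provisioner entries fail with "connection refused" while the apiserver restarts. As a rough manual follow-up (not part of the harness), one could re-check the apiserver with the same status command the post-mortem runs and, only if the scheduler warning persisted after the control plane settled, apply the RBAC grant that the scheduler's own log message hints at. This is a sketch: the rolebinding name below is a placeholder, and the subject is taken from the user named in the forbidden error, not from anything this run created.

	# Re-check the apiserver once the node settles (same command the post-mortem uses above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-574044 -n kubernetes-upgrade-574044

	# Hypothetical grant following the hint in the scheduler log; the binding name is a placeholder
	kubectl --context kubernetes-upgrade-574044 create rolebinding extension-apiserver-authentication-reader-binding \
	  --namespace kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler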

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (61.69s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-464954 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-464954 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.755819493s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-464954] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-464954" primary control-plane node in "pause-464954" cluster
	* Updating the running kvm2 "pause-464954" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-464954" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
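The assertion at pause_test.go:100 expects the phrase "The running cluster does not require reconfiguration" in the second-start output; the stdout above instead shows "Updating the running kvm2 \"pause-464954\" VM" and "Preparing Kubernetes v1.30.3 on CRI-O 1.29.1", i.e. the second start went through the re-provisioning path. A rough way to reproduce the check by hand (a sketch, not the harness's own code) is to re-run the same start command from the Run line above and grep its output for the expected message:

	# Illustrative reproduction of the check in pause_test.go:100; flags copied from the Run line above
	out/minikube-linux-amd64 start -p pause-464954 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio 2>&1 \
	  | grep -F "The running cluster does not require reconfiguration"

The full stderr trace from the failing run follows.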
** stderr ** 
	I0719 15:34:35.695485   52659 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:34:35.695771   52659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:34:35.695781   52659 out.go:304] Setting ErrFile to fd 2...
	I0719 15:34:35.695787   52659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:34:35.695996   52659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:34:35.696531   52659 out.go:298] Setting JSON to false
	I0719 15:34:35.697473   52659 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4622,"bootTime":1721398654,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:34:35.697529   52659 start.go:139] virtualization: kvm guest
	I0719 15:34:35.699631   52659 out.go:177] * [pause-464954] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:34:35.701326   52659 notify.go:220] Checking for updates...
	I0719 15:34:35.701334   52659 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:34:35.702739   52659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:34:35.704283   52659 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:34:35.705689   52659 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:34:35.706980   52659 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:34:35.708227   52659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:34:35.710110   52659 config.go:182] Loaded profile config "pause-464954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:34:35.710781   52659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:34:35.710831   52659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:34:35.726473   52659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34965
	I0719 15:34:35.726946   52659 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:34:35.727634   52659 main.go:141] libmachine: Using API Version  1
	I0719 15:34:35.727664   52659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:34:35.728039   52659 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:34:35.728214   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:34:35.728528   52659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:34:35.728932   52659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:34:35.728995   52659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:34:35.747518   52659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36271
	I0719 15:34:35.747907   52659 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:34:35.748364   52659 main.go:141] libmachine: Using API Version  1
	I0719 15:34:35.748385   52659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:34:35.748739   52659 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:34:35.748944   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:34:35.784343   52659 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:34:35.785636   52659 start.go:297] selected driver: kvm2
	I0719 15:34:35.785663   52659 start.go:901] validating driver "kvm2" against &{Name:pause-464954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-464954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.48 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:34:35.785852   52659 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:34:35.786307   52659 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:34:35.786401   52659 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:34:35.803407   52659 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:34:35.804348   52659 cni.go:84] Creating CNI manager for ""
	I0719 15:34:35.804368   52659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:34:35.804455   52659 start.go:340] cluster config:
	{Name:pause-464954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-464954 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.48 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:34:35.804620   52659 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:34:35.806485   52659 out.go:177] * Starting "pause-464954" primary control-plane node in "pause-464954" cluster
	I0719 15:34:35.807632   52659 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:34:35.807667   52659 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:34:35.807676   52659 cache.go:56] Caching tarball of preloaded images
	I0719 15:34:35.807782   52659 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:34:35.807792   52659 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:34:35.807905   52659 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/config.json ...
	I0719 15:34:35.808079   52659 start.go:360] acquireMachinesLock for pause-464954: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:34:56.935131   52659 start.go:364] duration metric: took 21.127025022s to acquireMachinesLock for "pause-464954"
	I0719 15:34:56.935181   52659 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:34:56.935187   52659 fix.go:54] fixHost starting: 
	I0719 15:34:56.935578   52659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:34:56.935623   52659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:34:56.952135   52659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42475
	I0719 15:34:56.952487   52659 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:34:56.952986   52659 main.go:141] libmachine: Using API Version  1
	I0719 15:34:56.953020   52659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:34:56.953342   52659 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:34:56.953509   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:34:56.953616   52659 main.go:141] libmachine: (pause-464954) Calling .GetState
	I0719 15:34:56.954955   52659 fix.go:112] recreateIfNeeded on pause-464954: state=Running err=<nil>
	W0719 15:34:56.954990   52659 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:34:56.957320   52659 out.go:177] * Updating the running kvm2 "pause-464954" VM ...
	I0719 15:34:56.958579   52659 machine.go:94] provisionDockerMachine start ...
	I0719 15:34:56.958597   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:34:56.958799   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:34:56.961527   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:56.961925   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:34:56.961969   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:56.962118   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:34:56.962305   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:56.962500   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:56.962668   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:34:56.962852   52659 main.go:141] libmachine: Using SSH client type: native
	I0719 15:34:56.963104   52659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.48 22 <nil> <nil>}
	I0719 15:34:56.963122   52659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:34:57.079209   52659 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-464954
	
	I0719 15:34:57.079235   52659 main.go:141] libmachine: (pause-464954) Calling .GetMachineName
	I0719 15:34:57.079458   52659 buildroot.go:166] provisioning hostname "pause-464954"
	I0719 15:34:57.079492   52659 main.go:141] libmachine: (pause-464954) Calling .GetMachineName
	I0719 15:34:57.079637   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:34:57.081926   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.082277   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:34:57.082315   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.082480   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:34:57.082648   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:57.082796   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:57.082918   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:34:57.083076   52659 main.go:141] libmachine: Using SSH client type: native
	I0719 15:34:57.083268   52659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.48 22 <nil> <nil>}
	I0719 15:34:57.083280   52659 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-464954 && echo "pause-464954" | sudo tee /etc/hostname
	I0719 15:34:57.211794   52659 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-464954
	
	I0719 15:34:57.211826   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:34:57.214963   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.215380   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:34:57.215401   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.215577   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:34:57.215763   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:57.215933   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:57.216071   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:34:57.216219   52659 main.go:141] libmachine: Using SSH client type: native
	I0719 15:34:57.216383   52659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.48 22 <nil> <nil>}
	I0719 15:34:57.216397   52659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-464954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-464954/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-464954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:34:57.327336   52659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:34:57.327364   52659 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:34:57.327401   52659 buildroot.go:174] setting up certificates
	I0719 15:34:57.327410   52659 provision.go:84] configureAuth start
	I0719 15:34:57.327419   52659 main.go:141] libmachine: (pause-464954) Calling .GetMachineName
	I0719 15:34:57.327702   52659 main.go:141] libmachine: (pause-464954) Calling .GetIP
	I0719 15:34:57.332910   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.333269   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:34:57.333296   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.333485   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:34:57.335862   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.336261   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:34:57.336287   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.336396   52659 provision.go:143] copyHostCerts
	I0719 15:34:57.336469   52659 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:34:57.336486   52659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:34:57.336555   52659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:34:57.336674   52659 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:34:57.336684   52659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:34:57.336716   52659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:34:57.336801   52659 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:34:57.336817   52659 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:34:57.336847   52659 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:34:57.336935   52659 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.pause-464954 san=[127.0.0.1 192.168.83.48 localhost minikube pause-464954]
	I0719 15:34:57.597275   52659 provision.go:177] copyRemoteCerts
	I0719 15:34:57.597336   52659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:34:57.597364   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:34:57.599892   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.600278   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:34:57.600303   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.600462   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:34:57.600627   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:57.600757   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:34:57.600879   52659 sshutil.go:53] new ssh client: &{IP:192.168.83.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/pause-464954/id_rsa Username:docker}
	I0719 15:34:57.693469   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:34:57.720553   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 15:34:57.752210   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:34:57.778215   52659 provision.go:87] duration metric: took 450.793348ms to configureAuth
	I0719 15:34:57.778258   52659 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:34:57.778500   52659 config.go:182] Loaded profile config "pause-464954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:34:57.778582   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:34:57.781096   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.781413   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:34:57.781442   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:34:57.781580   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:34:57.781792   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:57.782005   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:34:57.782161   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:34:57.782380   52659 main.go:141] libmachine: Using SSH client type: native
	I0719 15:34:57.782650   52659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.48 22 <nil> <nil>}
	I0719 15:34:57.782680   52659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:35:05.347803   52659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:35:05.347832   52659 machine.go:97] duration metric: took 8.389238925s to provisionDockerMachine
	I0719 15:35:05.347844   52659 start.go:293] postStartSetup for "pause-464954" (driver="kvm2")
	I0719 15:35:05.347857   52659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:35:05.347877   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:35:05.348248   52659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:35:05.348281   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:35:05.351047   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.351391   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:35:05.351417   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.351541   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:35:05.351710   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:35:05.351852   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:35:05.352034   52659 sshutil.go:53] new ssh client: &{IP:192.168.83.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/pause-464954/id_rsa Username:docker}
	I0719 15:35:05.437766   52659 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:35:05.442241   52659 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:35:05.442268   52659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:35:05.442342   52659 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:35:05.442449   52659 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:35:05.442568   52659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:35:05.453089   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:35:05.478390   52659 start.go:296] duration metric: took 130.531895ms for postStartSetup
	I0719 15:35:05.478435   52659 fix.go:56] duration metric: took 8.543247553s for fixHost
	I0719 15:35:05.478460   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:35:05.481237   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.481561   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:35:05.481594   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.481759   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:35:05.481967   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:35:05.482126   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:35:05.482279   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:35:05.482431   52659 main.go:141] libmachine: Using SSH client type: native
	I0719 15:35:05.482603   52659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.48 22 <nil> <nil>}
	I0719 15:35:05.482616   52659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:35:05.595318   52659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721403305.588612301
	
	I0719 15:35:05.595344   52659 fix.go:216] guest clock: 1721403305.588612301
	I0719 15:35:05.595354   52659 fix.go:229] Guest: 2024-07-19 15:35:05.588612301 +0000 UTC Remote: 2024-07-19 15:35:05.478440303 +0000 UTC m=+29.817546466 (delta=110.171998ms)
	I0719 15:35:05.595398   52659 fix.go:200] guest clock delta is within tolerance: 110.171998ms
	I0719 15:35:05.595403   52659 start.go:83] releasing machines lock for "pause-464954", held for 8.660242202s
	I0719 15:35:05.595432   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:35:05.595721   52659 main.go:141] libmachine: (pause-464954) Calling .GetIP
	I0719 15:35:05.598230   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.598653   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:35:05.598681   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.598955   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:35:05.599485   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:35:05.599653   52659 main.go:141] libmachine: (pause-464954) Calling .DriverName
	I0719 15:35:05.599753   52659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:35:05.599793   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:35:05.599978   52659 ssh_runner.go:195] Run: cat /version.json
	I0719 15:35:05.600004   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHHostname
	I0719 15:35:05.602884   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.603270   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.603301   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:35:05.603323   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.603498   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:35:05.603678   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:35:05.603734   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:35:05.603758   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:05.603912   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:35:05.603966   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHPort
	I0719 15:35:05.604120   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHKeyPath
	I0719 15:35:05.604116   52659 sshutil.go:53] new ssh client: &{IP:192.168.83.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/pause-464954/id_rsa Username:docker}
	I0719 15:35:05.604265   52659 main.go:141] libmachine: (pause-464954) Calling .GetSSHUsername
	I0719 15:35:05.604379   52659 sshutil.go:53] new ssh client: &{IP:192.168.83.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/pause-464954/id_rsa Username:docker}
	I0719 15:35:05.712802   52659 ssh_runner.go:195] Run: systemctl --version
	I0719 15:35:05.720740   52659 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:35:05.886330   52659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:35:05.893217   52659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:35:05.893278   52659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:35:05.903497   52659 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 15:35:05.903526   52659 start.go:495] detecting cgroup driver to use...
	I0719 15:35:05.903602   52659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:35:05.923174   52659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:35:05.939136   52659 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:35:05.939196   52659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:35:05.954137   52659 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:35:05.971277   52659 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:35:06.153294   52659 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:35:06.308305   52659 docker.go:233] disabling docker service ...
	I0719 15:35:06.308377   52659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:35:06.364220   52659 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:35:06.414419   52659 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:35:06.767678   52659 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:35:07.105142   52659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:35:07.125008   52659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:35:07.314958   52659 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:35:07.315028   52659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:35:07.338392   52659 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:35:07.338471   52659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:35:07.384666   52659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:35:07.419131   52659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:35:07.526440   52659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:35:07.553484   52659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:35:07.574259   52659 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:35:07.594132   52659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:35:07.610663   52659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:35:07.626742   52659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:35:07.639928   52659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:35:07.884303   52659 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:35:08.572769   52659 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:35:08.572852   52659 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:35:08.578505   52659 start.go:563] Will wait 60s for crictl version
	I0719 15:35:08.578565   52659 ssh_runner.go:195] Run: which crictl
	I0719 15:35:08.583824   52659 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:35:08.633815   52659 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:35:08.633908   52659 ssh_runner.go:195] Run: crio --version
	I0719 15:35:08.674464   52659 ssh_runner.go:195] Run: crio --version
	I0719 15:35:08.710508   52659 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:35:08.711933   52659 main.go:141] libmachine: (pause-464954) Calling .GetIP
	I0719 15:35:08.715457   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:08.715874   52659 main.go:141] libmachine: (pause-464954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a5:79:f4", ip: ""} in network mk-pause-464954: {Iface:virbr2 ExpiryTime:2024-07-19 16:33:06 +0000 UTC Type:0 Mac:52:54:00:a5:79:f4 Iaid: IPaddr:192.168.83.48 Prefix:24 Hostname:pause-464954 Clientid:01:52:54:00:a5:79:f4}
	I0719 15:35:08.715901   52659 main.go:141] libmachine: (pause-464954) DBG | domain pause-464954 has defined IP address 192.168.83.48 and MAC address 52:54:00:a5:79:f4 in network mk-pause-464954
	I0719 15:35:08.716173   52659 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0719 15:35:08.722271   52659 kubeadm.go:883] updating cluster {Name:pause-464954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-464954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.48 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:35:08.722434   52659 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:35:08.722497   52659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:35:08.771200   52659 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:35:08.771226   52659 crio.go:433] Images already preloaded, skipping extraction
	I0719 15:35:08.771279   52659 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:35:08.810686   52659 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:35:08.810714   52659 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:35:08.810724   52659 kubeadm.go:934] updating node { 192.168.83.48 8443 v1.30.3 crio true true} ...
	I0719 15:35:08.810848   52659 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-464954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-464954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:35:08.810996   52659 ssh_runner.go:195] Run: crio config
	I0719 15:35:08.879083   52659 cni.go:84] Creating CNI manager for ""
	I0719 15:35:08.879105   52659 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:35:08.879117   52659 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:35:08.879145   52659 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.48 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-464954 NodeName:pause-464954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:35:08.879347   52659 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-464954"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:35:08.879416   52659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:35:08.895214   52659 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:35:08.895315   52659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:35:08.906403   52659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0719 15:35:08.928866   52659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:35:08.951929   52659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 15:35:08.982491   52659 ssh_runner.go:195] Run: grep 192.168.83.48	control-plane.minikube.internal$ /etc/hosts
	I0719 15:35:08.987432   52659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:35:09.153443   52659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:35:09.169498   52659 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954 for IP: 192.168.83.48
	I0719 15:35:09.169521   52659 certs.go:194] generating shared ca certs ...
	I0719 15:35:09.169539   52659 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:35:09.169722   52659 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:35:09.169774   52659 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:35:09.169785   52659 certs.go:256] generating profile certs ...
	I0719 15:35:09.169883   52659 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/client.key
	I0719 15:35:09.169961   52659 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/apiserver.key.b3bca0d2
	I0719 15:35:09.170016   52659 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/proxy-client.key
	I0719 15:35:09.170157   52659 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:35:09.170196   52659 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:35:09.170208   52659 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:35:09.170266   52659 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:35:09.170300   52659 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:35:09.170352   52659 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:35:09.170408   52659 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:35:09.171177   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:35:09.204493   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:35:09.234059   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:35:09.265899   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:35:09.297820   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 15:35:09.328796   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:35:09.359454   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:35:09.391214   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:35:09.422902   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:35:09.458576   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:35:09.489496   52659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:35:09.526961   52659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:35:09.618528   52659 ssh_runner.go:195] Run: openssl version
	I0719 15:35:09.633341   52659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:35:09.703212   52659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:35:09.714837   52659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:35:09.714899   52659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:35:09.726801   52659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:35:09.746720   52659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:35:09.775978   52659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:35:09.782963   52659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:35:09.783078   52659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:35:09.791701   52659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:35:09.806088   52659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:35:09.861473   52659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:35:09.884717   52659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:35:09.884791   52659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:35:09.899694   52659 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:35:09.937695   52659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:35:09.956005   52659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:35:09.963398   52659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:35:09.975786   52659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:35:09.986220   52659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:35:09.995471   52659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:35:10.016310   52659 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:35:10.054437   52659 kubeadm.go:392] StartCluster: {Name:pause-464954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-464954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.48 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:35:10.054635   52659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:35:10.054748   52659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:35:10.143034   52659 cri.go:89] found id: "61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2"
	I0719 15:35:10.143109   52659 cri.go:89] found id: "9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee"
	I0719 15:35:10.143126   52659 cri.go:89] found id: "7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66"
	I0719 15:35:10.143140   52659 cri.go:89] found id: "2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec"
	I0719 15:35:10.143153   52659 cri.go:89] found id: "5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba"
	I0719 15:35:10.143167   52659 cri.go:89] found id: "e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f"
	I0719 15:35:10.143179   52659 cri.go:89] found id: ""
	I0719 15:35:10.143240   52659 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
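The stderr capture above ends while the restart logic is re-checking certificate validity and waiting for the control plane to report healthy. For anyone reproducing those checks against the same node by hand, a minimal sketch is below; it assumes the pause-464954 profile still exists and is reachable via `minikube ssh`, and it only re-runs commands that appear verbatim in the log (openssl -checkend, the apiserver healthz probe, and the crictl listing):

	# Re-run one of the certificate-expiry checks from the log
	# (-checkend 86400 fails if the cert expires within 24h).
	minikube -p pause-464954 ssh -- sudo openssl x509 -noout \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400

	# Probe the same apiserver healthz endpoint the log polls
	# (192.168.83.48:8443 is the node IP/port reported above).
	minikube -p pause-464954 ssh -- curl -sk https://192.168.83.48:8443/healthz

	# List kube-system containers the way minikube does during the restart.
	minikube -p pause-464954 ssh -- sudo crictl ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system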
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-464954 -n pause-464954
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-464954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-464954 logs -n 25: (1.351444812s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo find                           | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo crio                           | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-526259                                     | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:34 UTC |
	| start   | -p force-systemd-env-802753                          | force-systemd-env-802753  | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-464954                                      | pause-464954              | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-490845 sudo                          | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-490845                               | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:34 UTC |
	| start   | -p NoKubernetes-490845                               | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-574044                         | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| delete  | -p force-systemd-env-802753                          | force-systemd-env-802753  | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| start   | -p kubernetes-upgrade-574044                         | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-632791                         | force-systemd-flag-632791 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-490845 sudo                          | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-490845                               | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| start   | -p cert-expiration-939600                            | cert-expiration-939600    | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:35:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:35:28.686915   53710 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:35:28.687136   53710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:35:28.687139   53710 out.go:304] Setting ErrFile to fd 2...
	I0719 15:35:28.687142   53710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:35:28.687320   53710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:35:28.687865   53710 out.go:298] Setting JSON to false
	I0719 15:35:28.688779   53710 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4675,"bootTime":1721398654,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:35:28.688829   53710 start.go:139] virtualization: kvm guest
	I0719 15:35:28.690920   53710 out.go:177] * [cert-expiration-939600] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:35:28.692709   53710 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:35:28.692748   53710 notify.go:220] Checking for updates...
	I0719 15:35:28.695513   53710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:35:28.696823   53710 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:35:28.698009   53710 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:35:28.699197   53710 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:35:28.700371   53710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:35:28.701919   53710 config.go:182] Loaded profile config "force-systemd-flag-632791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:35:28.702000   53710 config.go:182] Loaded profile config "kubernetes-upgrade-574044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:35:28.702100   53710 config.go:182] Loaded profile config "pause-464954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:35:28.702171   53710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:35:28.742002   53710 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 15:35:28.743520   53710 start.go:297] selected driver: kvm2
	I0719 15:35:28.743538   53710 start.go:901] validating driver "kvm2" against <nil>
	I0719 15:35:28.743551   53710 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:35:28.744405   53710 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:35:28.744486   53710 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:35:28.759822   53710 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:35:28.759873   53710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 15:35:28.760084   53710 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 15:35:28.760100   53710 cni.go:84] Creating CNI manager for ""
	I0719 15:35:28.760106   53710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:35:28.760114   53710 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:35:28.760172   53710 start.go:340] cluster config:
	{Name:cert-expiration-939600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-939600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:35:28.760300   53710 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:35:28.762327   53710 out.go:177] * Starting "cert-expiration-939600" primary control-plane node in "cert-expiration-939600" cluster
	I0719 15:35:27.650793   52659 pod_ready.go:102] pod "kube-apiserver-pause-464954" in "kube-system" namespace has status "Ready":"False"
	I0719 15:35:30.149185   52659 pod_ready.go:92] pod "kube-apiserver-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.149209   52659 pod_ready.go:81] duration metric: took 4.506441711s for pod "kube-apiserver-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.149222   52659 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.153847   52659 pod_ready.go:92] pod "kube-controller-manager-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.153868   52659 pod_ready.go:81] duration metric: took 4.638428ms for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.153879   52659 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.160172   52659 pod_ready.go:92] pod "kube-proxy-n8sj4" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.160191   52659 pod_ready.go:81] duration metric: took 6.304567ms for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.160201   52659 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.164536   52659 pod_ready.go:92] pod "kube-scheduler-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.164553   52659 pod_ready.go:81] duration metric: took 4.345601ms for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.164562   52659 pod_ready.go:38] duration metric: took 12.540856801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:35:30.164582   52659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:35:30.178922   52659 ops.go:34] apiserver oom_adj: -16
	I0719 15:35:30.178943   52659 kubeadm.go:597] duration metric: took 19.957095987s to restartPrimaryControlPlane
	I0719 15:35:30.178950   52659 kubeadm.go:394] duration metric: took 20.124522774s to StartCluster
	I0719 15:35:30.178965   52659 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:35:30.179026   52659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:35:30.179597   52659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:35:30.179803   52659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.48 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:35:30.179885   52659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:35:30.180103   52659 config.go:182] Loaded profile config "pause-464954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:35:30.181625   52659 out.go:177] * Enabled addons: 
	I0719 15:35:30.181633   52659 out.go:177] * Verifying Kubernetes components...
	I0719 15:35:30.182758   52659 addons.go:510] duration metric: took 2.876402ms for enable addons: enabled=[]
	I0719 15:35:30.182789   52659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:35:30.372160   52659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:35:30.387851   52659 node_ready.go:35] waiting up to 6m0s for node "pause-464954" to be "Ready" ...
	I0719 15:35:30.390945   52659 node_ready.go:49] node "pause-464954" has status "Ready":"True"
	I0719 15:35:30.390966   52659 node_ready.go:38] duration metric: took 3.088506ms for node "pause-464954" to be "Ready" ...
	I0719 15:35:30.390976   52659 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:35:30.397434   52659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5625x" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.546140   52659 pod_ready.go:92] pod "coredns-7db6d8ff4d-5625x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.546166   52659 pod_ready.go:81] duration metric: took 148.708319ms for pod "coredns-7db6d8ff4d-5625x" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.546176   52659 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:28.589092   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:28.589502   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:28.589526   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:28.589470   53543 retry.go:31] will retry after 584.355018ms: waiting for machine to come up
	I0719 15:35:29.174891   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.175605   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.175632   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:29.175532   53543 retry.go:31] will retry after 783.407425ms: waiting for machine to come up
	I0719 15:35:29.960543   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.961079   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.961102   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:29.961041   53543 retry.go:31] will retry after 1.119754414s: waiting for machine to come up
	I0719 15:35:31.082605   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:31.083054   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:31.083091   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:31.083013   53543 retry.go:31] will retry after 1.172135057s: waiting for machine to come up
	I0719 15:35:32.257369   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:32.257783   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:32.257803   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:32.257752   53543 retry.go:31] will retry after 1.253346183s: waiting for machine to come up
	I0719 15:35:30.947121   52659 pod_ready.go:92] pod "etcd-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.947148   52659 pod_ready.go:81] duration metric: took 400.963831ms for pod "etcd-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.947162   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.346896   52659 pod_ready.go:92] pod "kube-apiserver-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:31.346921   52659 pod_ready.go:81] duration metric: took 399.750861ms for pod "kube-apiserver-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.346935   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.746901   52659 pod_ready.go:92] pod "kube-controller-manager-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:31.746923   52659 pod_ready.go:81] duration metric: took 399.980014ms for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.746935   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.151350   52659 pod_ready.go:92] pod "kube-proxy-n8sj4" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:32.151376   52659 pod_ready.go:81] duration metric: took 404.433709ms for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.151387   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.547604   52659 pod_ready.go:92] pod "kube-scheduler-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:32.547634   52659 pod_ready.go:81] duration metric: took 396.237781ms for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.547646   52659 pod_ready.go:38] duration metric: took 2.156654213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:35:32.547662   52659 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:35:32.547713   52659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:35:32.563294   52659 api_server.go:72] duration metric: took 2.383461605s to wait for apiserver process to appear ...
	I0719 15:35:32.563322   52659 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:35:32.563341   52659 api_server.go:253] Checking apiserver healthz at https://192.168.83.48:8443/healthz ...
	I0719 15:35:32.567385   52659 api_server.go:279] https://192.168.83.48:8443/healthz returned 200:
	ok
	I0719 15:35:32.568578   52659 api_server.go:141] control plane version: v1.30.3
	I0719 15:35:32.568592   52659 api_server.go:131] duration metric: took 5.264715ms to wait for apiserver health ...
	I0719 15:35:32.568599   52659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:35:32.750080   52659 system_pods.go:59] 6 kube-system pods found
	I0719 15:35:32.750115   52659 system_pods.go:61] "coredns-7db6d8ff4d-5625x" [cd203d01-af11-4b86-87be-6fbb2d51114f] Running
	I0719 15:35:32.750122   52659 system_pods.go:61] "etcd-pause-464954" [ae13a722-5ab7-4535-ac4c-2c4c647c3cbb] Running
	I0719 15:35:32.750128   52659 system_pods.go:61] "kube-apiserver-pause-464954" [ff5fbacc-f26a-4bad-97b4-4229ae279255] Running
	I0719 15:35:32.750135   52659 system_pods.go:61] "kube-controller-manager-pause-464954" [8e69da1f-5849-4a11-a0bd-9482eb6c393b] Running
	I0719 15:35:32.750139   52659 system_pods.go:61] "kube-proxy-n8sj4" [8270be48-dc52-42f3-9473-7f892be5d141] Running
	I0719 15:35:32.750143   52659 system_pods.go:61] "kube-scheduler-pause-464954" [09cba1d2-cb9c-44b2-bab8-be9268c590dd] Running
	I0719 15:35:32.750151   52659 system_pods.go:74] duration metric: took 181.545926ms to wait for pod list to return data ...
	I0719 15:35:32.750162   52659 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:35:32.946909   52659 default_sa.go:45] found service account: "default"
	I0719 15:35:32.946936   52659 default_sa.go:55] duration metric: took 196.762238ms for default service account to be created ...
	I0719 15:35:32.946946   52659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:35:33.151169   52659 system_pods.go:86] 6 kube-system pods found
	I0719 15:35:33.151203   52659 system_pods.go:89] "coredns-7db6d8ff4d-5625x" [cd203d01-af11-4b86-87be-6fbb2d51114f] Running
	I0719 15:35:33.151211   52659 system_pods.go:89] "etcd-pause-464954" [ae13a722-5ab7-4535-ac4c-2c4c647c3cbb] Running
	I0719 15:35:33.151217   52659 system_pods.go:89] "kube-apiserver-pause-464954" [ff5fbacc-f26a-4bad-97b4-4229ae279255] Running
	I0719 15:35:33.151225   52659 system_pods.go:89] "kube-controller-manager-pause-464954" [8e69da1f-5849-4a11-a0bd-9482eb6c393b] Running
	I0719 15:35:33.151232   52659 system_pods.go:89] "kube-proxy-n8sj4" [8270be48-dc52-42f3-9473-7f892be5d141] Running
	I0719 15:35:33.151237   52659 system_pods.go:89] "kube-scheduler-pause-464954" [09cba1d2-cb9c-44b2-bab8-be9268c590dd] Running
	I0719 15:35:33.151247   52659 system_pods.go:126] duration metric: took 204.293864ms to wait for k8s-apps to be running ...
	I0719 15:35:33.151259   52659 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:35:33.151311   52659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:35:33.165939   52659 system_svc.go:56] duration metric: took 14.667219ms WaitForService to wait for kubelet
	I0719 15:35:33.165968   52659 kubeadm.go:582] duration metric: took 2.986140486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:35:33.165988   52659 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:35:33.346754   52659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:35:33.346782   52659 node_conditions.go:123] node cpu capacity is 2
	I0719 15:35:33.346796   52659 node_conditions.go:105] duration metric: took 180.802533ms to run NodePressure ...
	I0719 15:35:33.346809   52659 start.go:241] waiting for startup goroutines ...
	I0719 15:35:33.346817   52659 start.go:246] waiting for cluster config update ...
	I0719 15:35:33.346826   52659 start.go:255] writing updated cluster config ...
	I0719 15:35:33.347148   52659 ssh_runner.go:195] Run: rm -f paused
	I0719 15:35:33.397789   52659 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:35:33.401128   52659 out.go:177] * Done! kubectl is now configured to use "pause-464954" cluster and "default" namespace by default
	I0719 15:35:28.763768   53710 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:35:28.763807   53710 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:35:28.763813   53710 cache.go:56] Caching tarball of preloaded images
	I0719 15:35:28.763942   53710 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:35:28.763952   53710 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:35:28.764082   53710 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/cert-expiration-939600/config.json ...
	I0719 15:35:28.764101   53710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/cert-expiration-939600/config.json: {Name:mk6f325f8701bdb25793c74efa5b0bf35b7dd53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:35:28.764285   53710 start.go:360] acquireMachinesLock for cert-expiration-939600: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> CRI-O <==
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.093825087Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5625x,Uid:cd203d01-af11-4b86-87be-6fbb2d51114f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403310021448782,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T15:33:51.890075254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&PodSandboxMetadata{Name:etcd-pause-464954,Uid:f91766c941f04e14ac6d6dacc5c79622,Namespace:kube-system,Attempt:2,
},State:SANDBOX_READY,CreatedAt:1721403309852286635,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.48:2379,kubernetes.io/config.hash: f91766c941f04e14ac6d6dacc5c79622,kubernetes.io/config.seen: 2024-07-19T15:33:35.962158207Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-464954,Uid:6bb294852599915b4478e6ae4ddaeb70,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309850695215,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6bb294852599915b4478e6ae4ddaeb70,kubernetes.io/config.seen: 2024-07-19T15:33:35.962162948Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-464954,Uid:b73c4897e47622ac0ea00e6bd07949b0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309831307359,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.48:8443,kubernetes.io/config.hash: b73c4897e47622ac0ea00e6bd07949b0,kubernetes.io/config.seen: 2024-07-19T15
:33:35.962161968Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&PodSandboxMetadata{Name:kube-proxy-n8sj4,Uid:8270be48-dc52-42f3-9473-7f892be5d141,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309822256208,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T15:33:51.839224315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-464954,Uid:a8423758431568fd541c359de99e2cfb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309695079778,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a8423758431568fd541c359de99e2cfb,kubernetes.io/config.seen: 2024-07-19T15:33:35.962163736Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f760ff2d-1915-4fa4-84f1-41d9cf0d4be6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.094683312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5317d89a-c415-42a4-921a-a681503ec067 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.094800200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5317d89a-c415-42a4-921a-a681503ec067 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.094959091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5317d89a-c415-42a4-921a-a681503ec067 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.118640744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d865952-3b8c-4b32-aae0-18e4d2212973 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.118755997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d865952-3b8c-4b32-aae0-18e4d2212973 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.120805665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f545702b-366b-4ceb-8d08-3e04fe348563 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.121341298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403334121310530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f545702b-366b-4ceb-8d08-3e04fe348563 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.122154357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b6d6ac9-b7f4-49dc-bad1-5ec8a64ee85c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.122255863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b6d6ac9-b7f4-49dc-bad1-5ec8a64ee85c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.122731819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2,PodSandboxId:7663e80c0f4dd51d8073f3cb547e3e8d1c2e5a9e4ea051999bc2153b25f37641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403307828116822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075
d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba,PodSandboxId:6d4f2b7fe71772d580684075632baec6c86de5daaf9af1bb89104db197596d88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721403306985816286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66,PodSandboxId:510d89f77a561b286e5f526cd4c12415fb57e257215a47f43bbfddda1dd37770,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721403307013387665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee,PodSandboxId:b8bfebac6339e10e1a074cf9c0465d5dfe19e3ce90f89e3879acb1ef6e8a4bbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721403307064800543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annotations:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec,PodSandboxId:f58bb9e6c088bf596eb35920805ee00879f7725fefddda9f1ee067bb171906a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721403306996174990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f,PodSandboxId:a3a1cda35809441442f80b03c7b27231170cc1112b15aeb657694abf2e751d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721403306701477340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b6d6ac9-b7f4-49dc-bad1-5ec8a64ee85c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.174889047Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c3e2a3c-7f33-4acb-afb4-8843e30bd710 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.175007733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c3e2a3c-7f33-4acb-afb4-8843e30bd710 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.176436913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1c18a0b-9648-475b-95fe-c051727c25d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.177133904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403334177101716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1c18a0b-9648-475b-95fe-c051727c25d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.177894563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14568e8e-dfc7-491d-848d-6ef2f8b05299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.177984026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14568e8e-dfc7-491d-848d-6ef2f8b05299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.178354800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2,PodSandboxId:7663e80c0f4dd51d8073f3cb547e3e8d1c2e5a9e4ea051999bc2153b25f37641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403307828116822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075
d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba,PodSandboxId:6d4f2b7fe71772d580684075632baec6c86de5daaf9af1bb89104db197596d88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721403306985816286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66,PodSandboxId:510d89f77a561b286e5f526cd4c12415fb57e257215a47f43bbfddda1dd37770,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721403307013387665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee,PodSandboxId:b8bfebac6339e10e1a074cf9c0465d5dfe19e3ce90f89e3879acb1ef6e8a4bbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721403307064800543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annotations:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec,PodSandboxId:f58bb9e6c088bf596eb35920805ee00879f7725fefddda9f1ee067bb171906a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721403306996174990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f,PodSandboxId:a3a1cda35809441442f80b03c7b27231170cc1112b15aeb657694abf2e751d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721403306701477340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14568e8e-dfc7-491d-848d-6ef2f8b05299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.232772732Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba401f33-098d-46cd-bf3e-10ce7588257d name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.232878399Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba401f33-098d-46cd-bf3e-10ce7588257d name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.234061859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df843fca-2f8b-484b-afb4-f971ae61d89c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.234502357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403334234479841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df843fca-2f8b-484b-afb4-f971ae61d89c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.235406741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8507ed27-7965-42b6-aeb1-465fdcb3ad6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.235480120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8507ed27-7965-42b6-aeb1-465fdcb3ad6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:34 pause-464954 crio[2839]: time="2024-07-19 15:35:34.235781287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2,PodSandboxId:7663e80c0f4dd51d8073f3cb547e3e8d1c2e5a9e4ea051999bc2153b25f37641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403307828116822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075
d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba,PodSandboxId:6d4f2b7fe71772d580684075632baec6c86de5daaf9af1bb89104db197596d88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721403306985816286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66,PodSandboxId:510d89f77a561b286e5f526cd4c12415fb57e257215a47f43bbfddda1dd37770,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721403307013387665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee,PodSandboxId:b8bfebac6339e10e1a074cf9c0465d5dfe19e3ce90f89e3879acb1ef6e8a4bbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721403307064800543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annotations:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec,PodSandboxId:f58bb9e6c088bf596eb35920805ee00879f7725fefddda9f1ee067bb171906a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721403306996174990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f,PodSandboxId:a3a1cda35809441442f80b03c7b27231170cc1112b15aeb657694abf2e751d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721403306701477340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8507ed27-7965-42b6-aeb1-465fdcb3ad6b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	acd242e881fc6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago      Running             coredns                   2                   d73da3a8e90ab       coredns-7db6d8ff4d-5625x
	9a84983f82af7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 seconds ago      Running             kube-proxy                2                   3c184623589ae       kube-proxy-n8sj4
	d401ef43e1f83       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   c24e00a448b5d       etcd-pause-464954
	06dc71d651778       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   21 seconds ago      Running             kube-controller-manager   2                   8534a96a1706d       kube-controller-manager-pause-464954
	e3e902406c4a1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago      Running             kube-apiserver            2                   079076e018478       kube-apiserver-pause-464954
	beb458bfa93b4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   21 seconds ago      Running             kube-scheduler            2                   880d12a6b0640       kube-scheduler-pause-464954
	61f50cddbe874       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   26 seconds ago      Exited              coredns                   1                   7663e80c0f4dd       coredns-7db6d8ff4d-5625x
	9e025307d3d1a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Exited              etcd                      1                   b8bfebac6339e       etcd-pause-464954
	7ab1584527bf9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   27 seconds ago      Exited              kube-scheduler            1                   510d89f77a561       kube-scheduler-pause-464954
	2c64b1148ba42       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   27 seconds ago      Exited              kube-controller-manager   1                   f58bb9e6c088b       kube-controller-manager-pause-464954
	5d52898a84a22       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   27 seconds ago      Exited              kube-proxy                1                   6d4f2b7fe7177       kube-proxy-n8sj4
	e04f9cf0ba4d4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   27 seconds ago      Exited              kube-apiserver            1                   a3a1cda358094       kube-apiserver-pause-464954
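	
	A minimal sketch, assuming the binary built for this run is at out/minikube-linux-amd64 and the pause-464954 profile is still running, of how a container listing like the table above can be reproduced from the VM (crictl ships in the minikube guest image):
	
	    out/minikube-linux-amd64 -p pause-464954 ssh "sudo crictl ps -a"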
	
	
	==> coredns [61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2] <==
	
	
	==> coredns [acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45520 - 15833 "HINFO IN 7889816300237899074.7747344064560749157. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012027463s
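	
	A sketch of how the CoreDNS log above can be pulled directly from the cluster, assuming the kubeconfig context created for the pause-464954 profile is available:
	
	    kubectl --context pause-464954 -n kube-system logs coredns-7db6d8ff4d-5625x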
	
	
	==> describe nodes <==
	Name:               pause-464954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-464954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=pause-464954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_33_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:33:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-464954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:35:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.48
	  Hostname:    pause-464954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 93aa6d4f8d2347b884b833f9102e1718
	  System UUID:                93aa6d4f-8d23-47b8-84b8-33f9102e1718
	  Boot ID:                    efd369d2-d5d8-447f-a7cf-b6c85a867c48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5625x                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-pause-464954                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-464954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-pause-464954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-n8sj4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-pause-464954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 99s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     118s               kubelet          Node pause-464954 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node pause-464954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node pause-464954 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  118s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                117s               kubelet          Node pause-464954 status is now: NodeReady
	  Normal  RegisteredNode           106s               node-controller  Node pause-464954 event: Registered Node pause-464954 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-464954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-464954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-464954 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-464954 event: Registered Node pause-464954 in Controller
	
	
	==> dmesg <==
	[ +11.038211] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062485] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.083445] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.196754] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.165177] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.324389] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.791071] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.064649] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.068405] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.065827] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.503048] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.081100] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.257200] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +0.093284] kauditd_printk_skb: 21 callbacks suppressed
	[Jul19 15:34] kauditd_printk_skb: 69 callbacks suppressed
	[Jul19 15:35] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.173094] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.366005] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.388753] systemd-fstab-generator[2434]: Ignoring "noauto" option for root device
	[  +0.743276] systemd-fstab-generator[2683]: Ignoring "noauto" option for root device
	[  +1.336337] systemd-fstab-generator[3032]: Ignoring "noauto" option for root device
	[  +2.748945] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.082622] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.017460] kauditd_printk_skb: 48 callbacks suppressed
	[ +13.353129] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	
	
	==> etcd [9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee] <==
	{"level":"info","ts":"2024-07-19T15:35:07.547702Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.708817ms"}
	{"level":"info","ts":"2024-07-19T15:35:07.644378Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-19T15:35:07.68817Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","commit-index":430}
	{"level":"info","ts":"2024-07-19T15:35:07.688342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-19T15:35:07.68841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became follower at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:07.688459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 32802cd757072290 [peers: [], term: 2, commit: 430, applied: 0, lastindex: 430, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-19T15:35:07.708767Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-19T15:35:07.755631Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":404}
	{"level":"info","ts":"2024-07-19T15:35:07.764638Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-19T15:35:07.800282Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"32802cd757072290","timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:35:07.824024Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"32802cd757072290"}
	{"level":"info","ts":"2024-07-19T15:35:07.82415Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"32802cd757072290","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-19T15:35:07.824556Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-19T15:35:07.824688Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:07.824711Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:07.82472Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:07.824945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 switched to configuration voters=(3638957802305036944)"}
	{"level":"info","ts":"2024-07-19T15:35:07.824991Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","added-peer-id":"32802cd757072290","added-peer-peer-urls":["https://192.168.83.48:2380"]}
	{"level":"info","ts":"2024-07-19T15:35:07.825079Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:07.8251Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:07.885302Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:07.885328Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:07.88505Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:35:07.923799Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:35:07.923755Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"32802cd757072290","initial-advertise-peer-urls":["https://192.168.83.48:2380"],"listen-peer-urls":["https://192.168.83.48:2380"],"advertise-client-urls":["https://192.168.83.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> etcd [d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462] <==
	{"level":"info","ts":"2024-07-19T15:35:13.004105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:13.004167Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:13.004243Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:35:13.004474Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"32802cd757072290","initial-advertise-peer-urls":["https://192.168.83.48:2380"],"listen-peer-urls":["https://192.168.83.48:2380"],"advertise-client-urls":["https://192.168.83.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:35:13.00456Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:35:13.004612Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"32802cd757072290","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004696Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004736Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004757Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004933Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:13.004958Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:13.859607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:13.859746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:13.859813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 received MsgPreVoteResp from 32802cd757072290 at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:13.859854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.85988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 received MsgVoteResp from 32802cd757072290 at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.859909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.859951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 32802cd757072290 elected leader 32802cd757072290 at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.871783Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"32802cd757072290","local-member-attributes":"{Name:pause-464954 ClientURLs:[https://192.168.83.48:2379]}","request-path":"/0/members/32802cd757072290/attributes","cluster-id":"1922bd1559689082","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:35:13.872689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:35:13.87308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:35:13.878307Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T15:35:13.878605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:35:13.87865Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:35:13.891375Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.48:2379"}
	
	
	==> kernel <==
	 15:35:34 up 2 min,  0 users,  load average: 0.97, 0.44, 0.17
	Linux pause-464954 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f] <==
	I0719 15:35:07.442914       1 options.go:221] external host was not specified, using 192.168.83.48
	I0719 15:35:07.443728       1 server.go:148] Version: v1.30.3
	I0719 15:35:07.443760       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97] <==
	I0719 15:35:16.010387       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 15:35:16.012882       1 aggregator.go:165] initial CRD sync complete...
	I0719 15:35:16.013028       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 15:35:16.013142       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 15:35:16.013224       1 cache.go:39] Caches are synced for autoregister controller
	I0719 15:35:16.038015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 15:35:16.038168       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 15:35:16.038226       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 15:35:16.039017       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 15:35:16.039318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 15:35:16.040495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 15:35:16.050614       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0719 15:35:16.061579       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 15:35:16.076780       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 15:35:16.077987       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 15:35:16.078025       1 policy_source.go:224] refreshing policies
	I0719 15:35:16.142099       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 15:35:16.845206       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 15:35:17.478308       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 15:35:17.506879       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 15:35:17.559455       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 15:35:17.595083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 15:35:17.602317       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 15:35:28.918671       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 15:35:29.019363       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8] <==
	I0719 15:35:28.728807       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0719 15:35:28.728857       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0719 15:35:28.728897       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0719 15:35:28.728908       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0719 15:35:28.735475       1 shared_informer.go:320] Caches are synced for job
	I0719 15:35:28.739843       1 shared_informer.go:320] Caches are synced for stateful set
	I0719 15:35:28.749290       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 15:35:28.751721       1 shared_informer.go:320] Caches are synced for PVC protection
	I0719 15:35:28.753010       1 shared_informer.go:320] Caches are synced for crt configmap
	I0719 15:35:28.754248       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0719 15:35:28.754327       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0719 15:35:28.760122       1 shared_informer.go:320] Caches are synced for persistent volume
	I0719 15:35:28.767161       1 shared_informer.go:320] Caches are synced for disruption
	I0719 15:35:28.769559       1 shared_informer.go:320] Caches are synced for ephemeral
	I0719 15:35:28.772883       1 shared_informer.go:320] Caches are synced for GC
	I0719 15:35:28.778427       1 shared_informer.go:320] Caches are synced for deployment
	I0719 15:35:28.786567       1 shared_informer.go:320] Caches are synced for HPA
	I0719 15:35:28.906995       1 shared_informer.go:320] Caches are synced for endpoint
	I0719 15:35:28.932379       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 15:35:28.956483       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 15:35:28.965146       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 15:35:28.976599       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 15:35:29.378771       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 15:35:29.417278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 15:35:29.417333       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec] <==
	
	
	==> kube-proxy [5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba] <==
	
	
	==> kube-proxy [9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6] <==
	I0719 15:35:16.653073       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:35:16.663261       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.83.48"]
	I0719 15:35:16.708075       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:35:16.708151       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:35:16.708174       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:35:16.712629       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:35:16.712990       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:35:16.713024       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:35:16.714966       1 config.go:192] "Starting service config controller"
	I0719 15:35:16.715013       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:35:16.715045       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:35:16.715050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:35:16.715908       1 config.go:319] "Starting node config controller"
	I0719 15:35:16.715946       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:35:16.815684       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:35:16.815805       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:35:16.816124       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66] <==
	
	
	==> kube-scheduler [beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a] <==
	I0719 15:35:13.841834       1 serving.go:380] Generated self-signed cert in-memory
	W0719 15:35:15.937979       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 15:35:15.938093       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:35:15.938129       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 15:35:15.938159       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 15:35:15.983172       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:35:15.983308       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:35:15.988788       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:35:15.988968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:35:15.990911       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:35:15.992624       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0719 15:35:15.997625       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 15:35:15.997765       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 15:35:16.891645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.266829    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b73c4897e47622ac0ea00e6bd07949b0-ca-certs\") pod \"kube-apiserver-pause-464954\" (UID: \"b73c4897e47622ac0ea00e6bd07949b0\") " pod="kube-system/kube-apiserver-pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.266841    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b73c4897e47622ac0ea00e6bd07949b0-k8s-certs\") pod \"kube-apiserver-pause-464954\" (UID: \"b73c4897e47622ac0ea00e6bd07949b0\") " pod="kube-system/kube-apiserver-pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.354678    3482 kubelet_node_status.go:73] "Attempting to register node" node="pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: E0719 15:35:12.355956    3482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.48:8443: connect: connection refused" node="pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.494341    3482 scope.go:117] "RemoveContainer" containerID="9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.495443    3482 scope.go:117] "RemoveContainer" containerID="e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.497653    3482 scope.go:117] "RemoveContainer" containerID="7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.498079    3482 scope.go:117] "RemoveContainer" containerID="2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: E0719 15:35:12.662885    3482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-464954?timeout=10s\": dial tcp 192.168.83.48:8443: connect: connection refused" interval="800ms"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.758919    3482 kubelet_node_status.go:73] "Attempting to register node" node="pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: E0719 15:35:12.761976    3482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.48:8443: connect: connection refused" node="pause-464954"
	Jul 19 15:35:13 pause-464954 kubelet[3482]: I0719 15:35:13.564433    3482 kubelet_node_status.go:73] "Attempting to register node" node="pause-464954"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.039930    3482 apiserver.go:52] "Watching apiserver"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.046082    3482 topology_manager.go:215] "Topology Admit Handler" podUID="cd203d01-af11-4b86-87be-6fbb2d51114f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5625x"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.046369    3482 topology_manager.go:215] "Topology Admit Handler" podUID="8270be48-dc52-42f3-9473-7f892be5d141" podNamespace="kube-system" podName="kube-proxy-n8sj4"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.055866    3482 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.129724    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8270be48-dc52-42f3-9473-7f892be5d141-xtables-lock\") pod \"kube-proxy-n8sj4\" (UID: \"8270be48-dc52-42f3-9473-7f892be5d141\") " pod="kube-system/kube-proxy-n8sj4"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.129833    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8270be48-dc52-42f3-9473-7f892be5d141-lib-modules\") pod \"kube-proxy-n8sj4\" (UID: \"8270be48-dc52-42f3-9473-7f892be5d141\") " pod="kube-system/kube-proxy-n8sj4"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.193899    3482 kubelet_node_status.go:112] "Node was previously registered" node="pause-464954"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.194041    3482 kubelet_node_status.go:76] "Successfully registered node" node="pause-464954"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.196726    3482 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.198100    3482 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.347747    3482 scope.go:117] "RemoveContainer" containerID="5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.348552    3482 scope.go:117] "RemoveContainer" containerID="61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2"
	Jul 19 15:35:25 pause-464954 kubelet[3482]: I0719 15:35:25.202481    3482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-464954 -n pause-464954
helpers_test.go:261: (dbg) Run:  kubectl --context pause-464954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-464954 -n pause-464954
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-464954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-464954 logs -n 25: (1.336315976s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo cat                            | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo                                | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo find                           | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-526259 sudo crio                           | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-526259                                     | cilium-526259             | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:34 UTC |
	| start   | -p force-systemd-env-802753                          | force-systemd-env-802753  | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-464954                                      | pause-464954              | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-490845 sudo                          | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-490845                               | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:34 UTC |
	| start   | -p NoKubernetes-490845                               | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:34 UTC | 19 Jul 24 15:35 UTC |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-574044                         | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| delete  | -p force-systemd-env-802753                          | force-systemd-env-802753  | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| start   | -p kubernetes-upgrade-574044                         | kubernetes-upgrade-574044 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-632791                         | force-systemd-flag-632791 | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-490845 sudo                          | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-490845                               | NoKubernetes-490845       | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC | 19 Jul 24 15:35 UTC |
	| start   | -p cert-expiration-939600                            | cert-expiration-939600    | jenkins | v1.33.1 | 19 Jul 24 15:35 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:35:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:35:28.686915   53710 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:35:28.687136   53710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:35:28.687139   53710 out.go:304] Setting ErrFile to fd 2...
	I0719 15:35:28.687142   53710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:35:28.687320   53710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:35:28.687865   53710 out.go:298] Setting JSON to false
	I0719 15:35:28.688779   53710 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4675,"bootTime":1721398654,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:35:28.688829   53710 start.go:139] virtualization: kvm guest
	I0719 15:35:28.690920   53710 out.go:177] * [cert-expiration-939600] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:35:28.692709   53710 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:35:28.692748   53710 notify.go:220] Checking for updates...
	I0719 15:35:28.695513   53710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:35:28.696823   53710 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:35:28.698009   53710 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:35:28.699197   53710 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:35:28.700371   53710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:35:28.701919   53710 config.go:182] Loaded profile config "force-systemd-flag-632791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:35:28.702000   53710 config.go:182] Loaded profile config "kubernetes-upgrade-574044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:35:28.702100   53710 config.go:182] Loaded profile config "pause-464954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:35:28.702171   53710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:35:28.742002   53710 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 15:35:28.743520   53710 start.go:297] selected driver: kvm2
	I0719 15:35:28.743538   53710 start.go:901] validating driver "kvm2" against <nil>
	I0719 15:35:28.743551   53710 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:35:28.744405   53710 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:35:28.744486   53710 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:35:28.759822   53710 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:35:28.759873   53710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 15:35:28.760084   53710 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 15:35:28.760100   53710 cni.go:84] Creating CNI manager for ""
	I0719 15:35:28.760106   53710 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:35:28.760114   53710 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:35:28.760172   53710 start.go:340] cluster config:
	{Name:cert-expiration-939600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-939600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:35:28.760300   53710 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:35:28.762327   53710 out.go:177] * Starting "cert-expiration-939600" primary control-plane node in "cert-expiration-939600" cluster
	I0719 15:35:27.650793   52659 pod_ready.go:102] pod "kube-apiserver-pause-464954" in "kube-system" namespace has status "Ready":"False"
	I0719 15:35:30.149185   52659 pod_ready.go:92] pod "kube-apiserver-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.149209   52659 pod_ready.go:81] duration metric: took 4.506441711s for pod "kube-apiserver-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.149222   52659 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.153847   52659 pod_ready.go:92] pod "kube-controller-manager-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.153868   52659 pod_ready.go:81] duration metric: took 4.638428ms for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.153879   52659 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.160172   52659 pod_ready.go:92] pod "kube-proxy-n8sj4" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.160191   52659 pod_ready.go:81] duration metric: took 6.304567ms for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.160201   52659 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.164536   52659 pod_ready.go:92] pod "kube-scheduler-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.164553   52659 pod_ready.go:81] duration metric: took 4.345601ms for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.164562   52659 pod_ready.go:38] duration metric: took 12.540856801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:35:30.164582   52659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:35:30.178922   52659 ops.go:34] apiserver oom_adj: -16
	I0719 15:35:30.178943   52659 kubeadm.go:597] duration metric: took 19.957095987s to restartPrimaryControlPlane
	I0719 15:35:30.178950   52659 kubeadm.go:394] duration metric: took 20.124522774s to StartCluster
	I0719 15:35:30.178965   52659 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:35:30.179026   52659 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:35:30.179597   52659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:35:30.179803   52659 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.83.48 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:35:30.179885   52659 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:35:30.180103   52659 config.go:182] Loaded profile config "pause-464954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:35:30.181625   52659 out.go:177] * Enabled addons: 
	I0719 15:35:30.181633   52659 out.go:177] * Verifying Kubernetes components...
	I0719 15:35:30.182758   52659 addons.go:510] duration metric: took 2.876402ms for enable addons: enabled=[]
	I0719 15:35:30.182789   52659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:35:30.372160   52659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:35:30.387851   52659 node_ready.go:35] waiting up to 6m0s for node "pause-464954" to be "Ready" ...
	I0719 15:35:30.390945   52659 node_ready.go:49] node "pause-464954" has status "Ready":"True"
	I0719 15:35:30.390966   52659 node_ready.go:38] duration metric: took 3.088506ms for node "pause-464954" to be "Ready" ...
	I0719 15:35:30.390976   52659 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:35:30.397434   52659 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5625x" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.546140   52659 pod_ready.go:92] pod "coredns-7db6d8ff4d-5625x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.546166   52659 pod_ready.go:81] duration metric: took 148.708319ms for pod "coredns-7db6d8ff4d-5625x" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.546176   52659 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:28.589092   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:28.589502   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:28.589526   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:28.589470   53543 retry.go:31] will retry after 584.355018ms: waiting for machine to come up
	I0719 15:35:29.174891   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.175605   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.175632   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:29.175532   53543 retry.go:31] will retry after 783.407425ms: waiting for machine to come up
	I0719 15:35:29.960543   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.961079   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:29.961102   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:29.961041   53543 retry.go:31] will retry after 1.119754414s: waiting for machine to come up
	I0719 15:35:31.082605   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:31.083054   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:31.083091   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:31.083013   53543 retry.go:31] will retry after 1.172135057s: waiting for machine to come up
	I0719 15:35:32.257369   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | domain kubernetes-upgrade-574044 has defined MAC address 52:54:00:0a:cf:68 in network mk-kubernetes-upgrade-574044
	I0719 15:35:32.257783   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | unable to find current IP address of domain kubernetes-upgrade-574044 in network mk-kubernetes-upgrade-574044
	I0719 15:35:32.257803   53466 main.go:141] libmachine: (kubernetes-upgrade-574044) DBG | I0719 15:35:32.257752   53543 retry.go:31] will retry after 1.253346183s: waiting for machine to come up
	I0719 15:35:30.947121   52659 pod_ready.go:92] pod "etcd-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:30.947148   52659 pod_ready.go:81] duration metric: took 400.963831ms for pod "etcd-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:30.947162   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.346896   52659 pod_ready.go:92] pod "kube-apiserver-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:31.346921   52659 pod_ready.go:81] duration metric: took 399.750861ms for pod "kube-apiserver-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.346935   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.746901   52659 pod_ready.go:92] pod "kube-controller-manager-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:31.746923   52659 pod_ready.go:81] duration metric: took 399.980014ms for pod "kube-controller-manager-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:31.746935   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.151350   52659 pod_ready.go:92] pod "kube-proxy-n8sj4" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:32.151376   52659 pod_ready.go:81] duration metric: took 404.433709ms for pod "kube-proxy-n8sj4" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.151387   52659 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.547604   52659 pod_ready.go:92] pod "kube-scheduler-pause-464954" in "kube-system" namespace has status "Ready":"True"
	I0719 15:35:32.547634   52659 pod_ready.go:81] duration metric: took 396.237781ms for pod "kube-scheduler-pause-464954" in "kube-system" namespace to be "Ready" ...
	I0719 15:35:32.547646   52659 pod_ready.go:38] duration metric: took 2.156654213s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:35:32.547662   52659 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:35:32.547713   52659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:35:32.563294   52659 api_server.go:72] duration metric: took 2.383461605s to wait for apiserver process to appear ...
	I0719 15:35:32.563322   52659 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:35:32.563341   52659 api_server.go:253] Checking apiserver healthz at https://192.168.83.48:8443/healthz ...
	I0719 15:35:32.567385   52659 api_server.go:279] https://192.168.83.48:8443/healthz returned 200:
	ok
	I0719 15:35:32.568578   52659 api_server.go:141] control plane version: v1.30.3
	I0719 15:35:32.568592   52659 api_server.go:131] duration metric: took 5.264715ms to wait for apiserver health ...
	I0719 15:35:32.568599   52659 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:35:32.750080   52659 system_pods.go:59] 6 kube-system pods found
	I0719 15:35:32.750115   52659 system_pods.go:61] "coredns-7db6d8ff4d-5625x" [cd203d01-af11-4b86-87be-6fbb2d51114f] Running
	I0719 15:35:32.750122   52659 system_pods.go:61] "etcd-pause-464954" [ae13a722-5ab7-4535-ac4c-2c4c647c3cbb] Running
	I0719 15:35:32.750128   52659 system_pods.go:61] "kube-apiserver-pause-464954" [ff5fbacc-f26a-4bad-97b4-4229ae279255] Running
	I0719 15:35:32.750135   52659 system_pods.go:61] "kube-controller-manager-pause-464954" [8e69da1f-5849-4a11-a0bd-9482eb6c393b] Running
	I0719 15:35:32.750139   52659 system_pods.go:61] "kube-proxy-n8sj4" [8270be48-dc52-42f3-9473-7f892be5d141] Running
	I0719 15:35:32.750143   52659 system_pods.go:61] "kube-scheduler-pause-464954" [09cba1d2-cb9c-44b2-bab8-be9268c590dd] Running
	I0719 15:35:32.750151   52659 system_pods.go:74] duration metric: took 181.545926ms to wait for pod list to return data ...
	I0719 15:35:32.750162   52659 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:35:32.946909   52659 default_sa.go:45] found service account: "default"
	I0719 15:35:32.946936   52659 default_sa.go:55] duration metric: took 196.762238ms for default service account to be created ...
	I0719 15:35:32.946946   52659 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:35:33.151169   52659 system_pods.go:86] 6 kube-system pods found
	I0719 15:35:33.151203   52659 system_pods.go:89] "coredns-7db6d8ff4d-5625x" [cd203d01-af11-4b86-87be-6fbb2d51114f] Running
	I0719 15:35:33.151211   52659 system_pods.go:89] "etcd-pause-464954" [ae13a722-5ab7-4535-ac4c-2c4c647c3cbb] Running
	I0719 15:35:33.151217   52659 system_pods.go:89] "kube-apiserver-pause-464954" [ff5fbacc-f26a-4bad-97b4-4229ae279255] Running
	I0719 15:35:33.151225   52659 system_pods.go:89] "kube-controller-manager-pause-464954" [8e69da1f-5849-4a11-a0bd-9482eb6c393b] Running
	I0719 15:35:33.151232   52659 system_pods.go:89] "kube-proxy-n8sj4" [8270be48-dc52-42f3-9473-7f892be5d141] Running
	I0719 15:35:33.151237   52659 system_pods.go:89] "kube-scheduler-pause-464954" [09cba1d2-cb9c-44b2-bab8-be9268c590dd] Running
	I0719 15:35:33.151247   52659 system_pods.go:126] duration metric: took 204.293864ms to wait for k8s-apps to be running ...
	I0719 15:35:33.151259   52659 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:35:33.151311   52659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:35:33.165939   52659 system_svc.go:56] duration metric: took 14.667219ms WaitForService to wait for kubelet
	I0719 15:35:33.165968   52659 kubeadm.go:582] duration metric: took 2.986140486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:35:33.165988   52659 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:35:33.346754   52659 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:35:33.346782   52659 node_conditions.go:123] node cpu capacity is 2
	I0719 15:35:33.346796   52659 node_conditions.go:105] duration metric: took 180.802533ms to run NodePressure ...
	I0719 15:35:33.346809   52659 start.go:241] waiting for startup goroutines ...
	I0719 15:35:33.346817   52659 start.go:246] waiting for cluster config update ...
	I0719 15:35:33.346826   52659 start.go:255] writing updated cluster config ...
	I0719 15:35:33.347148   52659 ssh_runner.go:195] Run: rm -f paused
	I0719 15:35:33.397789   52659 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:35:33.401128   52659 out.go:177] * Done! kubectl is now configured to use "pause-464954" cluster and "default" namespace by default
	I0719 15:35:28.763768   53710 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:35:28.763807   53710 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:35:28.763813   53710 cache.go:56] Caching tarball of preloaded images
	I0719 15:35:28.763942   53710 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:35:28.763952   53710 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:35:28.764082   53710 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/cert-expiration-939600/config.json ...
	I0719 15:35:28.764101   53710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/cert-expiration-939600/config.json: {Name:mk6f325f8701bdb25793c74efa5b0bf35b7dd53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:35:28.764285   53710 start.go:360] acquireMachinesLock for cert-expiration-939600: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
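
The minikube log above finishes its verification sequence by probing the apiserver health endpoint ("Checking apiserver healthz at https://192.168.83.48:8443/healthz ..." returning "200: ok"). The following is a minimal Go sketch of that kind of readiness probe, included only for illustration: the endpoint URL is taken from the log, but the retry interval and the skipped TLS verification are assumptions made here for brevity (minikube itself authenticates against the cluster CA rather than disabling verification).

// healthz_probe.go - illustrative sketch of the apiserver /healthz poll recorded in the log.
// Assumptions: endpoint from the log; InsecureSkipVerify used only to keep the example short.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: certificate verification is skipped in this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Poll until the apiserver reports "ok", mirroring the wait loop in the log above.
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get("https://192.168.83.48:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy in time")
}
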
	
	
	==> CRI-O <==
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.092829988Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5625x,Uid:cd203d01-af11-4b86-87be-6fbb2d51114f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403310021448782,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T15:33:51.890075254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&PodSandboxMetadata{Name:etcd-pause-464954,Uid:f91766c941f04e14ac6d6dacc5c79622,Namespace:kube-system,Attempt:2,
},State:SANDBOX_READY,CreatedAt:1721403309852286635,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.48:2379,kubernetes.io/config.hash: f91766c941f04e14ac6d6dacc5c79622,kubernetes.io/config.seen: 2024-07-19T15:33:35.962158207Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-464954,Uid:6bb294852599915b4478e6ae4ddaeb70,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309850695215,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6bb294852599915b4478e6ae4ddaeb70,kubernetes.io/config.seen: 2024-07-19T15:33:35.962162948Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-464954,Uid:b73c4897e47622ac0ea00e6bd07949b0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309831307359,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.48:8443,kubernetes.io/config.hash: b73c4897e47622ac0ea00e6bd07949b0,kubernetes.io/config.seen: 2024-07-19T15
:33:35.962161968Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&PodSandboxMetadata{Name:kube-proxy-n8sj4,Uid:8270be48-dc52-42f3-9473-7f892be5d141,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309822256208,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T15:33:51.839224315Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-464954,Uid:a8423758431568fd541c359de99e2cfb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721403309695079778,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a8423758431568fd541c359de99e2cfb,kubernetes.io/config.seen: 2024-07-19T15:33:35.962163736Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=166c86df-c400-45ae-b795-8d381bee4895 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.093397730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4911b63-9ffe-4bf6-900b-a6b1b2e15295 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.093449636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4911b63-9ffe-4bf6-900b-a6b1b2e15295 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.093870661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4911b63-9ffe-4bf6-900b-a6b1b2e15295 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.097889846Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7929f7c6-ab53-4d38-9569-ee39b628a1a4 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.097941219Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7929f7c6-ab53-4d38-9569-ee39b628a1a4 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.099935850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb8ce3fb-eaee-40ab-ab3f-7ffc14e61311 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.100271160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403336100252554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb8ce3fb-eaee-40ab-ab3f-7ffc14e61311 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.100838901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2252dbf8-1d8b-4d70-bbc9-5a13c9a2c74b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.100917009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2252dbf8-1d8b-4d70-bbc9-5a13c9a2c74b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.101137872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2,PodSandboxId:7663e80c0f4dd51d8073f3cb547e3e8d1c2e5a9e4ea051999bc2153b25f37641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403307828116822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075
d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba,PodSandboxId:6d4f2b7fe71772d580684075632baec6c86de5daaf9af1bb89104db197596d88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721403306985816286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66,PodSandboxId:510d89f77a561b286e5f526cd4c12415fb57e257215a47f43bbfddda1dd37770,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721403307013387665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee,PodSandboxId:b8bfebac6339e10e1a074cf9c0465d5dfe19e3ce90f89e3879acb1ef6e8a4bbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721403307064800543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annotations:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec,PodSandboxId:f58bb9e6c088bf596eb35920805ee00879f7725fefddda9f1ee067bb171906a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721403306996174990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f,PodSandboxId:a3a1cda35809441442f80b03c7b27231170cc1112b15aeb657694abf2e751d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721403306701477340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2252dbf8-1d8b-4d70-bbc9-5a13c9a2c74b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.144681470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c392d97-a82c-4632-b78f-c60548e903c2 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.144767386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c392d97-a82c-4632-b78f-c60548e903c2 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.146440978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6bb6359-b8ad-477b-8cfa-389ff2a00b05 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.147107863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403336147082891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6bb6359-b8ad-477b-8cfa-389ff2a00b05 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.147768698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6edd5b74-17ab-4f21-81d9-1e74de8d03af name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.147818181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6edd5b74-17ab-4f21-81d9-1e74de8d03af name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.148068734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2,PodSandboxId:7663e80c0f4dd51d8073f3cb547e3e8d1c2e5a9e4ea051999bc2153b25f37641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403307828116822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075
d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba,PodSandboxId:6d4f2b7fe71772d580684075632baec6c86de5daaf9af1bb89104db197596d88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721403306985816286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66,PodSandboxId:510d89f77a561b286e5f526cd4c12415fb57e257215a47f43bbfddda1dd37770,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721403307013387665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee,PodSandboxId:b8bfebac6339e10e1a074cf9c0465d5dfe19e3ce90f89e3879acb1ef6e8a4bbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721403307064800543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annotations:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec,PodSandboxId:f58bb9e6c088bf596eb35920805ee00879f7725fefddda9f1ee067bb171906a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721403306996174990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f,PodSandboxId:a3a1cda35809441442f80b03c7b27231170cc1112b15aeb657694abf2e751d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721403306701477340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6edd5b74-17ab-4f21-81d9-1e74de8d03af name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.200815038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0db538a8-4c80-441a-8ebb-6532630c8ff9 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.200983981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0db538a8-4c80-441a-8ebb-6532630c8ff9 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.203235440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6980333-931f-492c-a6e5-eb047893ccca name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.203673815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721403336203648962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6980333-931f-492c-a6e5-eb047893ccca name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.204276898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdc2a581-8a5b-48a3-ae50-865f988220ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.204347157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdc2a581-8a5b-48a3-ae50-865f988220ba name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:35:36 pause-464954 crio[2839]: time="2024-07-19 15:35:36.204659985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6,PodSandboxId:3c184623589ae9e9f801c1f02783c41351b13a4290fba01de0e154a07a8b6691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721403316369279931,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be,PodSandboxId:d73da3a8e90ab07f0242ae30a67bc3e41527f3f06e4cf2e1517b8bc47c31a2bc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721403316383376408,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462,PodSandboxId:c24e00a448b5d38c793cd2e9ba0a82f23b37ec2c01911436c6178b68a58c34ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721403312565799609,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8,PodSandboxId:8534a96a1706df43014e1e50aa1e5dae8b10281fd2a6bb2e2ff5ef03c7745551,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721403312553895276,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70
,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a,PodSandboxId:880d12a6b064010d1c948bdbaaa75b00261b679201ed69e095d2a58ef031c7df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721403312522235846,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97,PodSandboxId:079076e018478a8e5564ba6e6ef336dc8c3219c39871107d734d9ee7f5a3863c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721403312535305887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.
kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2,PodSandboxId:7663e80c0f4dd51d8073f3cb547e3e8d1c2e5a9e4ea051999bc2153b25f37641,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721403307828116822,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5625x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd203d01-af11-4b86-87be-6fbb2d51114f,},Annotations:map[string]string{io.kubernetes.container.hash: 70075
d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba,PodSandboxId:6d4f2b7fe71772d580684075632baec6c86de5daaf9af1bb89104db197596d88,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721403306985816286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-n8sj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8270be48-dc52-42f3-9473-7f892be5d141,},Annotations:map[string]string{io.kubernetes.container.hash: cceee8fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66,PodSandboxId:510d89f77a561b286e5f526cd4c12415fb57e257215a47f43bbfddda1dd37770,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721403307013387665,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause
-464954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8423758431568fd541c359de99e2cfb,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee,PodSandboxId:b8bfebac6339e10e1a074cf9c0465d5dfe19e3ce90f89e3879acb1ef6e8a4bbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721403307064800543,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464954,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: f91766c941f04e14ac6d6dacc5c79622,},Annotations:map[string]string{io.kubernetes.container.hash: 6d49c59c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec,PodSandboxId:f58bb9e6c088bf596eb35920805ee00879f7725fefddda9f1ee067bb171906a8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721403306996174990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464954,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6bb294852599915b4478e6ae4ddaeb70,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f,PodSandboxId:a3a1cda35809441442f80b03c7b27231170cc1112b15aeb657694abf2e751d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721403306701477340,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464954,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b73c4897e47622ac0ea00e6bd07949b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a67f5ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdc2a581-8a5b-48a3-ae50-865f988220ba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	acd242e881fc6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   d73da3a8e90ab       coredns-7db6d8ff4d-5625x
	9a84983f82af7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago      Running             kube-proxy                2                   3c184623589ae       kube-proxy-n8sj4
	d401ef43e1f83       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   c24e00a448b5d       etcd-pause-464954
	06dc71d651778       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago      Running             kube-controller-manager   2                   8534a96a1706d       kube-controller-manager-pause-464954
	e3e902406c4a1       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago      Running             kube-apiserver            2                   079076e018478       kube-apiserver-pause-464954
	beb458bfa93b4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago      Running             kube-scheduler            2                   880d12a6b0640       kube-scheduler-pause-464954
	61f50cddbe874       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   28 seconds ago      Exited              coredns                   1                   7663e80c0f4dd       coredns-7db6d8ff4d-5625x
	9e025307d3d1a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Exited              etcd                      1                   b8bfebac6339e       etcd-pause-464954
	7ab1584527bf9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   29 seconds ago      Exited              kube-scheduler            1                   510d89f77a561       kube-scheduler-pause-464954
	2c64b1148ba42       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   29 seconds ago      Exited              kube-controller-manager   1                   f58bb9e6c088b       kube-controller-manager-pause-464954
	5d52898a84a22       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   29 seconds ago      Exited              kube-proxy                1                   6d4f2b7fe7177       kube-proxy-n8sj4
	e04f9cf0ba4d4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   29 seconds ago      Exited              kube-apiserver            1                   a3a1cda358094       kube-apiserver-pause-464954
	
	
	==> coredns [61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2] <==
	
	
	==> coredns [acd242e881fc64b01a6eb4b691202f07d030e411df8e65618009b03a447da9be] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45520 - 15833 "HINFO IN 7889816300237899074.7747344064560749157. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012027463s
	
	
	==> describe nodes <==
	Name:               pause-464954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-464954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=pause-464954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_33_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:33:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-464954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 15:35:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:35:16 +0000   Fri, 19 Jul 2024 15:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.48
	  Hostname:    pause-464954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 93aa6d4f8d2347b884b833f9102e1718
	  System UUID:                93aa6d4f-8d23-47b8-84b8-33f9102e1718
	  Boot ID:                    efd369d2-d5d8-447f-a7cf-b6c85a867c48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5625x                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-pause-464954                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m2s
	  kube-system                 kube-apiserver-pause-464954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-464954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-n8sj4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-pause-464954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node pause-464954 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node pause-464954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node pause-464954 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                119s               kubelet          Node pause-464954 status is now: NodeReady
	  Normal  RegisteredNode           108s               node-controller  Node pause-464954 event: Registered Node pause-464954 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-464954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-464954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-464954 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-464954 event: Registered Node pause-464954 in Controller
	
	
	==> dmesg <==
	[ +11.038211] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062485] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.083445] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.196754] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.165177] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.324389] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +4.791071] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.064649] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.068405] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.065827] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.503048] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.081100] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.257200] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +0.093284] kauditd_printk_skb: 21 callbacks suppressed
	[Jul19 15:34] kauditd_printk_skb: 69 callbacks suppressed
	[Jul19 15:35] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.173094] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.366005] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.388753] systemd-fstab-generator[2434]: Ignoring "noauto" option for root device
	[  +0.743276] systemd-fstab-generator[2683]: Ignoring "noauto" option for root device
	[  +1.336337] systemd-fstab-generator[3032]: Ignoring "noauto" option for root device
	[  +2.748945] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.082622] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.017460] kauditd_printk_skb: 48 callbacks suppressed
	[ +13.353129] systemd-fstab-generator[3903]: Ignoring "noauto" option for root device
	
	
	==> etcd [9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee] <==
	{"level":"info","ts":"2024-07-19T15:35:07.547702Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.708817ms"}
	{"level":"info","ts":"2024-07-19T15:35:07.644378Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-19T15:35:07.68817Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","commit-index":430}
	{"level":"info","ts":"2024-07-19T15:35:07.688342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-19T15:35:07.68841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became follower at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:07.688459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 32802cd757072290 [peers: [], term: 2, commit: 430, applied: 0, lastindex: 430, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-19T15:35:07.708767Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-19T15:35:07.755631Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":404}
	{"level":"info","ts":"2024-07-19T15:35:07.764638Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-19T15:35:07.800282Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"32802cd757072290","timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:35:07.824024Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"32802cd757072290"}
	{"level":"info","ts":"2024-07-19T15:35:07.82415Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"32802cd757072290","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-19T15:35:07.824556Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-19T15:35:07.824688Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:07.824711Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:07.82472Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:07.824945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 switched to configuration voters=(3638957802305036944)"}
	{"level":"info","ts":"2024-07-19T15:35:07.824991Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","added-peer-id":"32802cd757072290","added-peer-peer-urls":["https://192.168.83.48:2380"]}
	{"level":"info","ts":"2024-07-19T15:35:07.825079Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:07.8251Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:07.885302Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:07.885328Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:07.88505Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:35:07.923799Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:35:07.923755Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"32802cd757072290","initial-advertise-peer-urls":["https://192.168.83.48:2380"],"listen-peer-urls":["https://192.168.83.48:2380"],"advertise-client-urls":["https://192.168.83.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	
	
	==> etcd [d401ef43e1f8318f61154eab0596b6df617c4746db74cad88da2e77ec0e9e462] <==
	{"level":"info","ts":"2024-07-19T15:35:13.004105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1922bd1559689082","local-member-id":"32802cd757072290","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:13.004167Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:35:13.004243Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T15:35:13.004474Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"32802cd757072290","initial-advertise-peer-urls":["https://192.168.83.48:2380"],"listen-peer-urls":["https://192.168.83.48:2380"],"advertise-client-urls":["https://192.168.83.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:35:13.00456Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:35:13.004612Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"32802cd757072290","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004696Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004736Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004757Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T15:35:13.004933Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:13.004958Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.48:2380"}
	{"level":"info","ts":"2024-07-19T15:35:13.859607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:13.859746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:13.859813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 received MsgPreVoteResp from 32802cd757072290 at term 2"}
	{"level":"info","ts":"2024-07-19T15:35:13.859854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.85988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 received MsgVoteResp from 32802cd757072290 at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.859909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32802cd757072290 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.859951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 32802cd757072290 elected leader 32802cd757072290 at term 3"}
	{"level":"info","ts":"2024-07-19T15:35:13.871783Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"32802cd757072290","local-member-attributes":"{Name:pause-464954 ClientURLs:[https://192.168.83.48:2379]}","request-path":"/0/members/32802cd757072290/attributes","cluster-id":"1922bd1559689082","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:35:13.872689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:35:13.87308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:35:13.878307Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T15:35:13.878605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:35:13.87865Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:35:13.891375Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.48:2379"}
	
	
	==> kernel <==
	 15:35:36 up 2 min,  0 users,  load average: 0.89, 0.43, 0.17
	Linux pause-464954 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f] <==
	I0719 15:35:07.442914       1 options.go:221] external host was not specified, using 192.168.83.48
	I0719 15:35:07.443728       1 server.go:148] Version: v1.30.3
	I0719 15:35:07.443760       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [e3e902406c4a154203843dedd82508fba8e7319e8622f4f1da83f15532f9eb97] <==
	I0719 15:35:16.010387       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 15:35:16.012882       1 aggregator.go:165] initial CRD sync complete...
	I0719 15:35:16.013028       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 15:35:16.013142       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 15:35:16.013224       1 cache.go:39] Caches are synced for autoregister controller
	I0719 15:35:16.038015       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 15:35:16.038168       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 15:35:16.038226       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 15:35:16.039017       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 15:35:16.039318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 15:35:16.040495       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 15:35:16.050614       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0719 15:35:16.061579       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 15:35:16.076780       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 15:35:16.077987       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 15:35:16.078025       1 policy_source.go:224] refreshing policies
	I0719 15:35:16.142099       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 15:35:16.845206       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 15:35:17.478308       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 15:35:17.506879       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 15:35:17.559455       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 15:35:17.595083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 15:35:17.602317       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 15:35:28.918671       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 15:35:29.019363       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [06dc71d6517781af210b74105eb497fa090cbb1364a3223e563618c91829a4d8] <==
	I0719 15:35:28.728807       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0719 15:35:28.728857       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0719 15:35:28.728897       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0719 15:35:28.728908       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0719 15:35:28.735475       1 shared_informer.go:320] Caches are synced for job
	I0719 15:35:28.739843       1 shared_informer.go:320] Caches are synced for stateful set
	I0719 15:35:28.749290       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 15:35:28.751721       1 shared_informer.go:320] Caches are synced for PVC protection
	I0719 15:35:28.753010       1 shared_informer.go:320] Caches are synced for crt configmap
	I0719 15:35:28.754248       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0719 15:35:28.754327       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0719 15:35:28.760122       1 shared_informer.go:320] Caches are synced for persistent volume
	I0719 15:35:28.767161       1 shared_informer.go:320] Caches are synced for disruption
	I0719 15:35:28.769559       1 shared_informer.go:320] Caches are synced for ephemeral
	I0719 15:35:28.772883       1 shared_informer.go:320] Caches are synced for GC
	I0719 15:35:28.778427       1 shared_informer.go:320] Caches are synced for deployment
	I0719 15:35:28.786567       1 shared_informer.go:320] Caches are synced for HPA
	I0719 15:35:28.906995       1 shared_informer.go:320] Caches are synced for endpoint
	I0719 15:35:28.932379       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 15:35:28.956483       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 15:35:28.965146       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 15:35:28.976599       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 15:35:29.378771       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 15:35:29.417278       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 15:35:29.417333       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec] <==
	
	
	==> kube-proxy [5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba] <==
	
	
	==> kube-proxy [9a84983f82af77745320300360badb03504ece7220cee09af7b588e5aaabe5b6] <==
	I0719 15:35:16.653073       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:35:16.663261       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.83.48"]
	I0719 15:35:16.708075       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:35:16.708151       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:35:16.708174       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:35:16.712629       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:35:16.712990       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:35:16.713024       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:35:16.714966       1 config.go:192] "Starting service config controller"
	I0719 15:35:16.715013       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:35:16.715045       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:35:16.715050       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:35:16.715908       1 config.go:319] "Starting node config controller"
	I0719 15:35:16.715946       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:35:16.815684       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:35:16.815805       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:35:16.816124       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66] <==
	
	
	==> kube-scheduler [beb458bfa93b452dc98a21629165b6c2969898eae31944489dacae26fde7652a] <==
	I0719 15:35:13.841834       1 serving.go:380] Generated self-signed cert in-memory
	W0719 15:35:15.937979       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 15:35:15.938093       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:35:15.938129       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 15:35:15.938159       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 15:35:15.983172       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:35:15.983308       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:35:15.988788       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:35:15.988968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:35:15.990911       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:35:15.992624       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0719 15:35:15.997625       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 15:35:15.997765       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 15:35:16.891645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.266829    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b73c4897e47622ac0ea00e6bd07949b0-ca-certs\") pod \"kube-apiserver-pause-464954\" (UID: \"b73c4897e47622ac0ea00e6bd07949b0\") " pod="kube-system/kube-apiserver-pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.266841    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b73c4897e47622ac0ea00e6bd07949b0-k8s-certs\") pod \"kube-apiserver-pause-464954\" (UID: \"b73c4897e47622ac0ea00e6bd07949b0\") " pod="kube-system/kube-apiserver-pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.354678    3482 kubelet_node_status.go:73] "Attempting to register node" node="pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: E0719 15:35:12.355956    3482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.48:8443: connect: connection refused" node="pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.494341    3482 scope.go:117] "RemoveContainer" containerID="9e025307d3d1a10c724a153fb02ccdf4ee759704d8d2bce6181c7d1ef3b8caee"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.495443    3482 scope.go:117] "RemoveContainer" containerID="e04f9cf0ba4d46e740210eb31629c34dacc74d464722affdd6f1c5d3b82cff7f"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.497653    3482 scope.go:117] "RemoveContainer" containerID="7ab1584527bf97a6c6a2785dab7bb7b77ae4d82269a1bc2cb8bd623012bdcd66"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.498079    3482 scope.go:117] "RemoveContainer" containerID="2c64b1148ba42880b201fd359875d7408756a258a67bdb8d26375229235702ec"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: E0719 15:35:12.662885    3482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-464954?timeout=10s\": dial tcp 192.168.83.48:8443: connect: connection refused" interval="800ms"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: I0719 15:35:12.758919    3482 kubelet_node_status.go:73] "Attempting to register node" node="pause-464954"
	Jul 19 15:35:12 pause-464954 kubelet[3482]: E0719 15:35:12.761976    3482 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.48:8443: connect: connection refused" node="pause-464954"
	Jul 19 15:35:13 pause-464954 kubelet[3482]: I0719 15:35:13.564433    3482 kubelet_node_status.go:73] "Attempting to register node" node="pause-464954"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.039930    3482 apiserver.go:52] "Watching apiserver"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.046082    3482 topology_manager.go:215] "Topology Admit Handler" podUID="cd203d01-af11-4b86-87be-6fbb2d51114f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5625x"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.046369    3482 topology_manager.go:215] "Topology Admit Handler" podUID="8270be48-dc52-42f3-9473-7f892be5d141" podNamespace="kube-system" podName="kube-proxy-n8sj4"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.055866    3482 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.129724    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8270be48-dc52-42f3-9473-7f892be5d141-xtables-lock\") pod \"kube-proxy-n8sj4\" (UID: \"8270be48-dc52-42f3-9473-7f892be5d141\") " pod="kube-system/kube-proxy-n8sj4"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.129833    3482 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8270be48-dc52-42f3-9473-7f892be5d141-lib-modules\") pod \"kube-proxy-n8sj4\" (UID: \"8270be48-dc52-42f3-9473-7f892be5d141\") " pod="kube-system/kube-proxy-n8sj4"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.193899    3482 kubelet_node_status.go:112] "Node was previously registered" node="pause-464954"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.194041    3482 kubelet_node_status.go:76] "Successfully registered node" node="pause-464954"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.196726    3482 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.198100    3482 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.347747    3482 scope.go:117] "RemoveContainer" containerID="5d52898a84a229b19007cdd756e4215a1ab0adaf99934721a9cca48932de81ba"
	Jul 19 15:35:16 pause-464954 kubelet[3482]: I0719 15:35:16.348552    3482 scope.go:117] "RemoveContainer" containerID="61f50cddbe874d98308b43eaa94742d08105106c288b45e162b6f11b5d070ea2"
	Jul 19 15:35:25 pause-464954 kubelet[3482]: I0719 15:35:25.202481    3482 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-464954 -n pause-464954
helpers_test.go:261: (dbg) Run:  kubectl --context pause-464954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.69s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (311.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-862924 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-862924 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m11.625190096s)

                                                
                                                
-- stdout --
	* [old-k8s-version-862924] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-862924" primary control-plane node in "old-k8s-version-862924" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 15:36:32.847529   54797 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:36:32.847808   54797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:36:32.847819   54797 out.go:304] Setting ErrFile to fd 2...
	I0719 15:36:32.847823   54797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:36:32.848034   54797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:36:32.848628   54797 out.go:298] Setting JSON to false
	I0719 15:36:32.849541   54797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4739,"bootTime":1721398654,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:36:32.849599   54797 start.go:139] virtualization: kvm guest
	I0719 15:36:32.851870   54797 out.go:177] * [old-k8s-version-862924] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:36:32.853593   54797 notify.go:220] Checking for updates...
	I0719 15:36:32.853612   54797 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:36:32.855050   54797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:36:32.856434   54797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:36:32.857713   54797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:36:32.858832   54797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:36:32.860196   54797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:36:32.861984   54797 config.go:182] Loaded profile config "cert-expiration-939600": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:36:32.862088   54797 config.go:182] Loaded profile config "cert-options-127438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:36:32.862161   54797 config.go:182] Loaded profile config "kubernetes-upgrade-574044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:36:32.862265   54797 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:36:32.899693   54797 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 15:36:32.900947   54797 start.go:297] selected driver: kvm2
	I0719 15:36:32.900964   54797 start.go:901] validating driver "kvm2" against <nil>
	I0719 15:36:32.900974   54797 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:36:32.901622   54797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:36:32.901694   54797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:36:32.917551   54797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:36:32.917615   54797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 15:36:32.917818   54797 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:36:32.917850   54797 cni.go:84] Creating CNI manager for ""
	I0719 15:36:32.917857   54797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:36:32.917864   54797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:36:32.917915   54797 start.go:340] cluster config:
	{Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:36:32.918023   54797 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:36:32.919804   54797 out.go:177] * Starting "old-k8s-version-862924" primary control-plane node in "old-k8s-version-862924" cluster
	I0719 15:36:32.921206   54797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:36:32.921236   54797 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 15:36:32.921244   54797 cache.go:56] Caching tarball of preloaded images
	I0719 15:36:32.921323   54797 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:36:32.921336   54797 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 15:36:32.921448   54797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:36:32.921471   54797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json: {Name:mkbfe85e4041ec07d286899f03356ccb5c9d393b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:36:32.921617   54797 start.go:360] acquireMachinesLock for old-k8s-version-862924: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:37:13.748163   54797 start.go:364] duration metric: took 40.826518399s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:37:13.748231   54797 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:37:13.748357   54797 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 15:37:13.781118   54797 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 15:37:13.781320   54797 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:37:13.781365   54797 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:37:13.797526   54797 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0719 15:37:13.797965   54797 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:37:13.798558   54797 main.go:141] libmachine: Using API Version  1
	I0719 15:37:13.798586   54797 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:37:13.798996   54797 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:37:13.799216   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:37:13.799414   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:13.799583   54797 start.go:159] libmachine.API.Create for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:37:13.799613   54797 client.go:168] LocalClient.Create starting
	I0719 15:37:13.799648   54797 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 15:37:13.799692   54797 main.go:141] libmachine: Decoding PEM data...
	I0719 15:37:13.799721   54797 main.go:141] libmachine: Parsing certificate...
	I0719 15:37:13.799798   54797 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 15:37:13.799822   54797 main.go:141] libmachine: Decoding PEM data...
	I0719 15:37:13.799842   54797 main.go:141] libmachine: Parsing certificate...
	I0719 15:37:13.799870   54797 main.go:141] libmachine: Running pre-create checks...
	I0719 15:37:13.799879   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .PreCreateCheck
	I0719 15:37:13.800204   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:37:13.800670   54797 main.go:141] libmachine: Creating machine...
	I0719 15:37:13.800690   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .Create
	I0719 15:37:13.800834   54797 main.go:141] libmachine: (old-k8s-version-862924) Creating KVM machine...
	I0719 15:37:13.802094   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found existing default KVM network
	I0719 15:37:13.803298   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:13.803162   55211 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:af:01} reservation:<nil>}
	I0719 15:37:13.804361   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:13.804273   55211 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6720}
	I0719 15:37:13.804391   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | created network xml: 
	I0719 15:37:13.804404   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | <network>
	I0719 15:37:13.804419   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |   <name>mk-old-k8s-version-862924</name>
	I0719 15:37:13.804430   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |   <dns enable='no'/>
	I0719 15:37:13.804441   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |   
	I0719 15:37:13.804451   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0719 15:37:13.804460   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |     <dhcp>
	I0719 15:37:13.804468   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0719 15:37:13.804489   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |     </dhcp>
	I0719 15:37:13.804501   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |   </ip>
	I0719 15:37:13.804510   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG |   
	I0719 15:37:13.804518   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | </network>
	I0719 15:37:13.804526   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | 
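The network XML above is what the kvm2 driver hands to libvirt for the private cluster network. As a rough sketch only (not the driver's actual code; the import path, package name, and error handling are assumptions), defining and starting such a network with the libvirt Go bindings looks like this:

    package kvmnet

    import (
    	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    // createPrivateNetwork defines and starts a private network from XML like the
    // one logged above (mk-old-k8s-version-862924 on 192.168.50.0/24).
    func createPrivateNetwork(networkXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	net, err := conn.NetworkDefineXML(networkXML) // persist the network definition
    	if err != nil {
    		return err
    	}
    	defer net.Free()
    	return net.Create() // start it, so the domain can attach to it
    }
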
	I0719 15:37:13.813291   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | trying to create private KVM network mk-old-k8s-version-862924 192.168.50.0/24...
	I0719 15:37:13.890473   54797 main.go:141] libmachine: (old-k8s-version-862924) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924 ...
	I0719 15:37:13.890510   54797 main.go:141] libmachine: (old-k8s-version-862924) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 15:37:13.890522   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | private KVM network mk-old-k8s-version-862924 192.168.50.0/24 created
	I0719 15:37:13.890540   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:13.889357   55211 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:37:13.890565   54797 main.go:141] libmachine: (old-k8s-version-862924) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 15:37:14.116598   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:14.116390   55211 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa...
	I0719 15:37:14.257089   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:14.256939   55211 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/old-k8s-version-862924.rawdisk...
	I0719 15:37:14.257134   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Writing magic tar header
	I0719 15:37:14.257153   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Writing SSH key tar header
	I0719 15:37:14.257168   54797 main.go:141] libmachine: (old-k8s-version-862924) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924 (perms=drwx------)
	I0719 15:37:14.257186   54797 main.go:141] libmachine: (old-k8s-version-862924) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 15:37:14.257203   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:14.257048   55211 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924 ...
	I0719 15:37:14.257225   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924
	I0719 15:37:14.257241   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 15:37:14.257256   54797 main.go:141] libmachine: (old-k8s-version-862924) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 15:37:14.257272   54797 main.go:141] libmachine: (old-k8s-version-862924) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 15:37:14.257283   54797 main.go:141] libmachine: (old-k8s-version-862924) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 15:37:14.257295   54797 main.go:141] libmachine: (old-k8s-version-862924) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 15:37:14.257304   54797 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:37:14.257336   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:37:14.257360   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 15:37:14.257372   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 15:37:14.257379   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Checking permissions on dir: /home/jenkins
	I0719 15:37:14.257391   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Checking permissions on dir: /home
	I0719 15:37:14.257399   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Skipping /home - not owner
	I0719 15:37:14.258554   54797 main.go:141] libmachine: (old-k8s-version-862924) define libvirt domain using xml: 
	I0719 15:37:14.258584   54797 main.go:141] libmachine: (old-k8s-version-862924) <domain type='kvm'>
	I0719 15:37:14.258597   54797 main.go:141] libmachine: (old-k8s-version-862924)   <name>old-k8s-version-862924</name>
	I0719 15:37:14.258607   54797 main.go:141] libmachine: (old-k8s-version-862924)   <memory unit='MiB'>2200</memory>
	I0719 15:37:14.258623   54797 main.go:141] libmachine: (old-k8s-version-862924)   <vcpu>2</vcpu>
	I0719 15:37:14.258632   54797 main.go:141] libmachine: (old-k8s-version-862924)   <features>
	I0719 15:37:14.258661   54797 main.go:141] libmachine: (old-k8s-version-862924)     <acpi/>
	I0719 15:37:14.258680   54797 main.go:141] libmachine: (old-k8s-version-862924)     <apic/>
	I0719 15:37:14.258695   54797 main.go:141] libmachine: (old-k8s-version-862924)     <pae/>
	I0719 15:37:14.258713   54797 main.go:141] libmachine: (old-k8s-version-862924)     
	I0719 15:37:14.258726   54797 main.go:141] libmachine: (old-k8s-version-862924)   </features>
	I0719 15:37:14.258739   54797 main.go:141] libmachine: (old-k8s-version-862924)   <cpu mode='host-passthrough'>
	I0719 15:37:14.258752   54797 main.go:141] libmachine: (old-k8s-version-862924)   
	I0719 15:37:14.258764   54797 main.go:141] libmachine: (old-k8s-version-862924)   </cpu>
	I0719 15:37:14.258777   54797 main.go:141] libmachine: (old-k8s-version-862924)   <os>
	I0719 15:37:14.258789   54797 main.go:141] libmachine: (old-k8s-version-862924)     <type>hvm</type>
	I0719 15:37:14.258814   54797 main.go:141] libmachine: (old-k8s-version-862924)     <boot dev='cdrom'/>
	I0719 15:37:14.258835   54797 main.go:141] libmachine: (old-k8s-version-862924)     <boot dev='hd'/>
	I0719 15:37:14.258848   54797 main.go:141] libmachine: (old-k8s-version-862924)     <bootmenu enable='no'/>
	I0719 15:37:14.258859   54797 main.go:141] libmachine: (old-k8s-version-862924)   </os>
	I0719 15:37:14.258869   54797 main.go:141] libmachine: (old-k8s-version-862924)   <devices>
	I0719 15:37:14.258879   54797 main.go:141] libmachine: (old-k8s-version-862924)     <disk type='file' device='cdrom'>
	I0719 15:37:14.258895   54797 main.go:141] libmachine: (old-k8s-version-862924)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/boot2docker.iso'/>
	I0719 15:37:14.258907   54797 main.go:141] libmachine: (old-k8s-version-862924)       <target dev='hdc' bus='scsi'/>
	I0719 15:37:14.258919   54797 main.go:141] libmachine: (old-k8s-version-862924)       <readonly/>
	I0719 15:37:14.258930   54797 main.go:141] libmachine: (old-k8s-version-862924)     </disk>
	I0719 15:37:14.258956   54797 main.go:141] libmachine: (old-k8s-version-862924)     <disk type='file' device='disk'>
	I0719 15:37:14.258984   54797 main.go:141] libmachine: (old-k8s-version-862924)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 15:37:14.259003   54797 main.go:141] libmachine: (old-k8s-version-862924)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/old-k8s-version-862924.rawdisk'/>
	I0719 15:37:14.259027   54797 main.go:141] libmachine: (old-k8s-version-862924)       <target dev='hda' bus='virtio'/>
	I0719 15:37:14.259042   54797 main.go:141] libmachine: (old-k8s-version-862924)     </disk>
	I0719 15:37:14.259055   54797 main.go:141] libmachine: (old-k8s-version-862924)     <interface type='network'>
	I0719 15:37:14.259067   54797 main.go:141] libmachine: (old-k8s-version-862924)       <source network='mk-old-k8s-version-862924'/>
	I0719 15:37:14.259080   54797 main.go:141] libmachine: (old-k8s-version-862924)       <model type='virtio'/>
	I0719 15:37:14.259093   54797 main.go:141] libmachine: (old-k8s-version-862924)     </interface>
	I0719 15:37:14.259106   54797 main.go:141] libmachine: (old-k8s-version-862924)     <interface type='network'>
	I0719 15:37:14.259120   54797 main.go:141] libmachine: (old-k8s-version-862924)       <source network='default'/>
	I0719 15:37:14.259131   54797 main.go:141] libmachine: (old-k8s-version-862924)       <model type='virtio'/>
	I0719 15:37:14.259145   54797 main.go:141] libmachine: (old-k8s-version-862924)     </interface>
	I0719 15:37:14.259154   54797 main.go:141] libmachine: (old-k8s-version-862924)     <serial type='pty'>
	I0719 15:37:14.259167   54797 main.go:141] libmachine: (old-k8s-version-862924)       <target port='0'/>
	I0719 15:37:14.259179   54797 main.go:141] libmachine: (old-k8s-version-862924)     </serial>
	I0719 15:37:14.259193   54797 main.go:141] libmachine: (old-k8s-version-862924)     <console type='pty'>
	I0719 15:37:14.259207   54797 main.go:141] libmachine: (old-k8s-version-862924)       <target type='serial' port='0'/>
	I0719 15:37:14.259220   54797 main.go:141] libmachine: (old-k8s-version-862924)     </console>
	I0719 15:37:14.259232   54797 main.go:141] libmachine: (old-k8s-version-862924)     <rng model='virtio'>
	I0719 15:37:14.259243   54797 main.go:141] libmachine: (old-k8s-version-862924)       <backend model='random'>/dev/random</backend>
	I0719 15:37:14.259255   54797 main.go:141] libmachine: (old-k8s-version-862924)     </rng>
	I0719 15:37:14.259268   54797 main.go:141] libmachine: (old-k8s-version-862924)     
	I0719 15:37:14.259280   54797 main.go:141] libmachine: (old-k8s-version-862924)     
	I0719 15:37:14.259293   54797 main.go:141] libmachine: (old-k8s-version-862924)   </devices>
	I0719 15:37:14.259305   54797 main.go:141] libmachine: (old-k8s-version-862924) </domain>
	I0719 15:37:14.259322   54797 main.go:141] libmachine: (old-k8s-version-862924) 
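Booting the domain described by the XML above follows the same two-step pattern, again as an illustration rather than the actual createHost code (function name and error handling are invented for the sketch):

    package kvmdomain

    import (
    	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    // defineAndStart persists the domain XML printed above and boots the VM; the
    // next step in the log is waiting for a DHCP lease on the private network.
    func defineAndStart(conn *libvirt.Connect, domainXML string) (*libvirt.Domain, error) {
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return nil, err
    	}
    	if err := dom.Create(); err != nil {
    		dom.Free()
    		return nil, err
    	}
    	return dom, nil
    }
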
	I0719 15:37:14.265742   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:be:5e:06 in network default
	I0719 15:37:14.266353   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:14.266375   54797 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:37:14.267149   54797 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:37:14.267519   54797 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:37:14.268140   54797 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:37:14.269010   54797 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:37:15.578725   54797 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:37:15.579540   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:15.580031   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:15.580086   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:15.580019   55211 retry.go:31] will retry after 192.294822ms: waiting for machine to come up
	I0719 15:37:15.774446   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:15.775070   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:15.775101   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:15.775028   55211 retry.go:31] will retry after 368.310018ms: waiting for machine to come up
	I0719 15:37:16.144652   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:16.145233   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:16.145350   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:16.145314   55211 retry.go:31] will retry after 484.204097ms: waiting for machine to come up
	I0719 15:37:16.631027   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:16.631662   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:16.631687   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:16.631600   55211 retry.go:31] will retry after 473.071812ms: waiting for machine to come up
	I0719 15:37:17.106007   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:17.106551   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:17.106573   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:17.106515   55211 retry.go:31] will retry after 628.895845ms: waiting for machine to come up
	I0719 15:37:17.737501   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:17.738027   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:17.738054   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:17.737995   55211 retry.go:31] will retry after 952.256957ms: waiting for machine to come up
	I0719 15:37:18.692041   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:18.692482   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:18.692509   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:18.692436   55211 retry.go:31] will retry after 810.175641ms: waiting for machine to come up
	I0719 15:37:19.503890   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:19.504395   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:19.504422   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:19.504344   55211 retry.go:31] will retry after 1.250767778s: waiting for machine to come up
	I0719 15:37:20.756551   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:20.757091   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:20.757120   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:20.757043   55211 retry.go:31] will retry after 1.317826779s: waiting for machine to come up
	I0719 15:37:22.076577   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:22.077080   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:22.077114   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:22.077040   55211 retry.go:31] will retry after 2.165024824s: waiting for machine to come up
	I0719 15:37:24.243556   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:24.244181   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:24.244211   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:24.244123   55211 retry.go:31] will retry after 1.942652305s: waiting for machine to come up
	I0719 15:37:26.560667   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:26.561409   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:26.561441   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:26.561361   55211 retry.go:31] will retry after 2.724602942s: waiting for machine to come up
	I0719 15:37:29.287883   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:29.288430   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:29.288478   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:29.288401   55211 retry.go:31] will retry after 4.518333411s: waiting for machine to come up
	I0719 15:37:33.808527   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:33.809163   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:37:33.809197   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:37:33.809144   55211 retry.go:31] will retry after 4.012052096s: waiting for machine to come up
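The retry lines above are the driver polling for a DHCP lease matching the domain's MAC, with a growing delay between attempts. A hedged sketch of that loop (the attempt count and backoff factor are illustrative, not the real retry helper's values):

    package kvmwait

    import (
    	"fmt"
    	"time"

    	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    // waitForIP polls the network's DHCP leases until one matches the domain's MAC
    // (52:54:00:36:d7:f3 in this log) and returns the leased address.
    func waitForIP(network *libvirt.Network, mac string) (string, error) {
    	delay := 200 * time.Millisecond
    	for attempt := 0; attempt < 20; attempt++ {
    		leases, err := network.GetDHCPLeases()
    		if err != nil {
    			return "", err
    		}
    		for _, lease := range leases {
    			if lease.Mac == mac {
    				return lease.IPaddr, nil
    			}
    		}
    		time.Sleep(delay)
    		delay *= 2 // grow the wait, roughly like the "will retry after ..." lines
    	}
    	return "", fmt.Errorf("no DHCP lease found for MAC %s", mac)
    }
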
	I0719 15:37:37.825983   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:37.826490   54797 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:37:37.826513   54797 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:37:37.826527   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:37.826888   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924
	I0719 15:37:37.898900   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:37:37.898929   54797 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:37:37.898949   54797 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:37:37.901316   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:37.901711   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:37.901738   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:37.901860   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:37:37.901894   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:37:37.901930   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:37:37.901940   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:37:37.901952   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:37:38.030276   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
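Once an address is assigned, the driver keeps running "exit 0" over SSH (as user docker, with the generated id_rsa) until the guest answers, as the external ssh invocation above shows. A rough equivalent using golang.org/x/crypto/ssh; the timeout and attempt count here are made up for the sketch:

    package sshwait

    import (
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // waitForSSH dials host:22 and runs "exit 0" until it succeeds, mirroring the
    // WaitForSSH step in the log.
    func waitForSSH(host, keyPath string) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	var lastErr error
    	for attempt := 0; attempt < 60; attempt++ {
    		if lastErr = runExitZero(host+":22", cfg); lastErr == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return lastErr
    }

    func runExitZero(addr string, cfg *ssh.ClientConfig) error {
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	return session.Run("exit 0")
    }
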
	I0719 15:37:38.030499   54797 main.go:141] libmachine: (old-k8s-version-862924) KVM machine creation complete!
	I0719 15:37:38.030808   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:37:38.031375   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:38.031598   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:38.031762   54797 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 15:37:38.031774   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:37:38.033004   54797 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 15:37:38.033015   54797 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 15:37:38.033020   54797 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 15:37:38.033027   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:38.035221   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.035507   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.035530   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.035685   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:38.035844   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.035982   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.036080   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:38.036239   54797 main.go:141] libmachine: Using SSH client type: native
	I0719 15:37:38.036408   54797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:37:38.036419   54797 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 15:37:38.137769   54797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:37:38.137796   54797 main.go:141] libmachine: Detecting the provisioner...
	I0719 15:37:38.137804   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:38.140490   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.140759   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.140799   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.140989   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:38.141179   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.141370   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.141510   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:38.141655   54797 main.go:141] libmachine: Using SSH client type: native
	I0719 15:37:38.141835   54797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:37:38.141847   54797 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 15:37:38.246846   54797 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 15:37:38.246919   54797 main.go:141] libmachine: found compatible host: buildroot
	I0719 15:37:38.246928   54797 main.go:141] libmachine: Provisioning with buildroot...
	I0719 15:37:38.246940   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:37:38.247166   54797 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:37:38.247187   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:37:38.247349   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:38.249561   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.249896   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.249916   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.250074   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:38.250231   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.250394   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.250540   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:38.250696   54797 main.go:141] libmachine: Using SSH client type: native
	I0719 15:37:38.250852   54797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:37:38.250865   54797 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:37:38.370825   54797 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:37:38.370851   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:38.373331   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.373598   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.373624   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.373779   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:38.373978   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.374107   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.374265   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:38.374425   54797 main.go:141] libmachine: Using SSH client type: native
	I0719 15:37:38.374621   54797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:37:38.374644   54797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:37:38.491142   54797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:37:38.491165   54797 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:37:38.491182   54797 buildroot.go:174] setting up certificates
	I0719 15:37:38.491193   54797 provision.go:84] configureAuth start
	I0719 15:37:38.491203   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:37:38.491518   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:37:38.493856   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.494144   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.494174   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.494293   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:38.496060   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.496340   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.496366   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.496499   54797 provision.go:143] copyHostCerts
	I0719 15:37:38.496562   54797 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:37:38.496575   54797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:37:38.496632   54797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:37:38.496751   54797 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:37:38.496761   54797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:37:38.496792   54797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:37:38.496891   54797 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:37:38.496900   54797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:37:38.496930   54797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:37:38.496999   54797 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
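The SANs listed above (127.0.0.1, the leased VM IP, and the machine names) are what the generated server.pem must cover. A compressed sketch with Go's crypto/x509; unlike minikube, which signs the server cert with ca-key.pem, this one is self-signed, and the key size and validity period are illustrative:

    package certsketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // writeServerCert emits a PEM-encoded server certificate whose SANs match the
    // san=[...] list in the log line above.
    func writeServerCert() error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	template := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-862924"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.102")},
    		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-862924"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, template, template, &key.PublicKey, key)
    	if err != nil {
    		return err
    	}
    	return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
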
	I0719 15:37:38.630632   54797 provision.go:177] copyRemoteCerts
	I0719 15:37:38.630698   54797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:37:38.630733   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:38.633100   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.633371   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.633412   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.633607   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:38.633789   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.633931   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:38.634048   54797 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:37:38.717037   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:37:38.741827   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:37:38.764762   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:37:38.786833   54797 provision.go:87] duration metric: took 295.627287ms to configureAuth
	I0719 15:37:38.786862   54797 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:37:38.787036   54797 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:37:38.787113   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:38.789680   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.790104   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:38.790126   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:38.790307   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:38.790463   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.790603   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:38.790724   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:38.790854   54797 main.go:141] libmachine: Using SSH client type: native
	I0719 15:37:38.791010   54797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:37:38.791026   54797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:37:39.052738   54797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:37:39.052764   54797 main.go:141] libmachine: Checking connection to Docker...
	I0719 15:37:39.052772   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetURL
	I0719 15:37:39.054348   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using libvirt version 6000000
	I0719 15:37:39.056468   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.056866   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:39.056888   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.057017   54797 main.go:141] libmachine: Docker is up and running!
	I0719 15:37:39.057033   54797 main.go:141] libmachine: Reticulating splines...
	I0719 15:37:39.057041   54797 client.go:171] duration metric: took 25.257418339s to LocalClient.Create
	I0719 15:37:39.057065   54797 start.go:167] duration metric: took 25.2574837s to libmachine.API.Create "old-k8s-version-862924"
	I0719 15:37:39.057076   54797 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:37:39.057085   54797 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:37:39.057097   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:39.057287   54797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:37:39.057312   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:39.059494   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.059803   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:39.059826   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.059938   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:39.060118   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:39.060292   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:39.060421   54797 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:37:39.140251   54797 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:37:39.144438   54797 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:37:39.144458   54797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:37:39.144514   54797 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:37:39.144617   54797 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:37:39.144720   54797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:37:39.154011   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:37:39.177205   54797 start.go:296] duration metric: took 120.118513ms for postStartSetup
	I0719 15:37:39.177246   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:37:39.177871   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:37:39.180210   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.180613   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:39.180643   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.181005   54797 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:37:39.181211   54797 start.go:128] duration metric: took 25.432842164s to createHost
	I0719 15:37:39.181235   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:39.183411   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.183718   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:39.183753   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.183854   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:39.184026   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:39.184217   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:39.184417   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:39.184627   54797 main.go:141] libmachine: Using SSH client type: native
	I0719 15:37:39.184812   54797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:37:39.184824   54797 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:37:39.286897   54797 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721403459.241554704
	
	I0719 15:37:39.286918   54797 fix.go:216] guest clock: 1721403459.241554704
	I0719 15:37:39.286925   54797 fix.go:229] Guest: 2024-07-19 15:37:39.241554704 +0000 UTC Remote: 2024-07-19 15:37:39.181224271 +0000 UTC m=+66.366310797 (delta=60.330433ms)
	I0719 15:37:39.286960   54797 fix.go:200] guest clock delta is within tolerance: 60.330433ms
	I0719 15:37:39.286967   54797 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 25.538768626s
	I0719 15:37:39.287001   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:39.287281   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:37:39.290040   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.290520   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:39.290606   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.290699   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:39.291210   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:39.291423   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:37:39.291530   54797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:37:39.291636   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:39.291644   54797 ssh_runner.go:195] Run: cat /version.json
	I0719 15:37:39.291718   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:37:39.294546   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.294798   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.294850   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:39.294877   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.295007   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:39.295210   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:39.295229   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:39.295250   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:39.295369   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:39.295424   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:37:39.295551   54797 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:37:39.295565   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:37:39.295732   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:37:39.295866   54797 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:37:39.400150   54797 ssh_runner.go:195] Run: systemctl --version
	I0719 15:37:39.409396   54797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:37:39.577041   54797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:37:39.583394   54797 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:37:39.583469   54797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:37:39.600693   54797 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:37:39.600716   54797 start.go:495] detecting cgroup driver to use...
	I0719 15:37:39.600793   54797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:37:39.616561   54797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:37:39.630139   54797 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:37:39.630184   54797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:37:39.643168   54797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:37:39.657206   54797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:37:39.777706   54797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:37:39.931619   54797 docker.go:233] disabling docker service ...
	I0719 15:37:39.931716   54797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:37:39.950430   54797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:37:39.963984   54797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:37:40.101908   54797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:37:40.239747   54797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:37:40.255651   54797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:37:40.274918   54797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:37:40.274979   54797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:37:40.285627   54797 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:37:40.285703   54797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:37:40.297574   54797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:37:40.308579   54797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:37:40.319491   54797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:37:40.330280   54797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:37:40.339553   54797 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:37:40.339590   54797 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:37:40.352421   54797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:37:40.361784   54797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:37:40.496066   54797 ssh_runner.go:195] Run: sudo systemctl restart crio
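	The sed commands and the rm above adjust CRI-O for this cluster before the restart; once they run, the drop-in they edit should contain roughly the following (a sketch assuming minikube's stock /etc/crio/crio.conf.d/02-crio.conf and the standard CRI-O section names, which the log itself does not show):

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.2"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"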
	I0719 15:37:40.636161   54797 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:37:40.636223   54797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:37:40.640975   54797 start.go:563] Will wait 60s for crictl version
	I0719 15:37:40.641025   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:40.644705   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:37:40.688005   54797 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:37:40.688085   54797 ssh_runner.go:195] Run: crio --version
	I0719 15:37:40.721092   54797 ssh_runner.go:195] Run: crio --version
	I0719 15:37:40.751891   54797 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:37:40.753272   54797 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:37:40.759037   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:40.759736   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:37:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:37:40.759765   54797 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:37:40.759983   54797 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:37:40.764352   54797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:37:40.778216   54797 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:37:40.778374   54797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:37:40.778432   54797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:37:40.821884   54797 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:37:40.821955   54797 ssh_runner.go:195] Run: which lz4
	I0719 15:37:40.825981   54797 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:37:40.830529   54797 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:37:40.830557   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:37:42.534212   54797 crio.go:462] duration metric: took 1.70825306s to copy over tarball
	I0719 15:37:42.534309   54797 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:37:45.104430   54797 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.570080669s)
	I0719 15:37:45.104493   54797 crio.go:469] duration metric: took 2.570209989s to extract the tarball
	I0719 15:37:45.104507   54797 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:37:45.149217   54797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:37:45.196502   54797 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:37:45.196535   54797 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:37:45.196617   54797 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:37:45.196639   54797 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:37:45.196648   54797 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:37:45.196696   54797 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:37:45.196728   54797 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:37:45.196741   54797 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:37:45.196765   54797 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:37:45.196618   54797 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:37:45.197835   54797 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:37:45.198053   54797 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:37:45.198072   54797 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:37:45.198053   54797 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:37:45.198103   54797 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:37:45.198114   54797 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:37:45.198094   54797 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:37:45.198137   54797 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:37:45.354649   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:37:45.365730   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:37:45.365979   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:37:45.369152   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:37:45.375887   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:37:45.389054   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:37:45.415088   54797 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:37:45.415153   54797 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:37:45.415203   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:45.428900   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:37:45.539387   54797 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:37:45.539430   54797 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:37:45.539428   54797 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:37:45.539458   54797 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:37:45.539476   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:45.539395   54797 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:37:45.539533   54797 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:37:45.539495   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:45.539595   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:45.556522   54797 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:37:45.556557   54797 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:37:45.556603   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:45.556610   54797 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:37:45.556637   54797 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:37:45.556655   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:37:45.556670   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:45.556686   54797 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:37:45.556705   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:37:45.556715   54797 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:37:45.556725   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:37:45.556744   54797 ssh_runner.go:195] Run: which crictl
	I0719 15:37:45.556756   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:37:45.613221   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:37:45.613397   54797 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:37:45.657496   54797 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:37:45.657557   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:37:45.667075   54797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:37:45.667111   54797 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:37:45.667155   54797 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:37:45.698784   54797 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:37:45.723573   54797 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:37:45.730335   54797 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:37:46.151168   54797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:37:46.293910   54797 cache_images.go:92] duration metric: took 1.09735523s to LoadCachedImages
	W0719 15:37:46.294011   54797 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0719 15:37:46.294073   54797 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:37:46.294223   54797 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
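	This drop-in is transferred to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; since this run later fails while waiting for the kubelet, a useful manual check on the guest is to confirm the flags actually took effect and see why the kubelet is not serving its health endpoint, for example (a sketch; these commands are not part of this log):

	    $ systemctl cat kubelet          # show kubelet.service plus the 10-kubeadm.conf drop-in
	    $ systemctl status kubelet       # check whether the unit is active
	    $ journalctl -xeu kubelet        # inspect why http://localhost:10248/healthz is refused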
	I0719 15:37:46.294311   54797 ssh_runner.go:195] Run: crio config
	I0719 15:37:46.347232   54797 cni.go:84] Creating CNI manager for ""
	I0719 15:37:46.347251   54797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:37:46.347260   54797 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:37:46.347283   54797 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:37:46.347442   54797 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:37:46.347498   54797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:37:46.358056   54797 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:37:46.358119   54797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:37:46.367415   54797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:37:46.385125   54797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:37:46.402083   54797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:37:46.420089   54797 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:37:46.423976   54797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
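	Between this rewrite and the earlier one for host.minikube.internal, the guest's /etc/hosts ends up with roughly these two minikube-specific entries (a sketch; the image's default entries are omitted):

	    192.168.50.1	host.minikube.internal
	    192.168.50.102	control-plane.minikube.internal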
	I0719 15:37:46.435949   54797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:37:46.569321   54797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:37:46.587647   54797 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:37:46.587674   54797 certs.go:194] generating shared ca certs ...
	I0719 15:37:46.587696   54797 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:46.587873   54797 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:37:46.587969   54797 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:37:46.587987   54797 certs.go:256] generating profile certs ...
	I0719 15:37:46.588066   54797 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:37:46.588085   54797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.crt with IP's: []
	I0719 15:37:46.734943   54797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.crt ...
	I0719 15:37:46.734979   54797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.crt: {Name:mk2333d319649b763100d8b4718a57b0a993aece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:46.735267   54797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key ...
	I0719 15:37:46.735289   54797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key: {Name:mk764ad25efa00784a2daf4aae8df27597c74a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:46.735392   54797 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:37:46.735412   54797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt.4659f1b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.102]
	I0719 15:37:47.020106   54797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt.4659f1b2 ...
	I0719 15:37:47.020135   54797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt.4659f1b2: {Name:mk76f46f96b32b3812d4e110dc9a6de135b220f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:47.020311   54797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2 ...
	I0719 15:37:47.020326   54797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2: {Name:mk82c2e840a665f1bb5b07961a88720803183a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:47.020420   54797 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt.4659f1b2 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt
	I0719 15:37:47.067693   54797 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key
	I0719 15:37:47.067821   54797 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:37:47.067845   54797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt with IP's: []
	I0719 15:37:47.154176   54797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt ...
	I0719 15:37:47.154203   54797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt: {Name:mkc3426e5e6eb5344951b8ff4778f04f6f855ece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:47.213986   54797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key ...
	I0719 15:37:47.214025   54797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key: {Name:mkfa6574902be71225f700eeff2caed32ab42756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:37:47.214299   54797 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:37:47.214350   54797 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:37:47.214368   54797 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:37:47.214395   54797 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:37:47.214420   54797 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:37:47.214447   54797 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:37:47.214498   54797 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:37:47.215312   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:37:47.242636   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:37:47.266008   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:37:47.292280   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:37:47.315369   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:37:47.338646   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:37:47.363960   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:37:47.388863   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:37:47.412736   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:37:47.444991   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:37:47.474661   54797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:37:47.515314   54797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:37:47.544618   54797 ssh_runner.go:195] Run: openssl version
	I0719 15:37:47.551568   54797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:37:47.562665   54797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:37:47.567042   54797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:37:47.567089   54797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:37:47.572800   54797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:37:47.583621   54797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:37:47.595007   54797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:37:47.600061   54797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:37:47.600107   54797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:37:47.606428   54797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:37:47.619068   54797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:37:47.632177   54797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:37:47.636459   54797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:37:47.636514   54797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:37:47.642292   54797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
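	The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are the subject-hash form that "openssl x509 -hash" prints for each certificate; the same value can be reproduced by hand, for example (a sketch using the minikubeCA.pem path from this run; the expected output is inferred from the b5213941.0 symlink rather than captured here):

	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0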
	I0719 15:37:47.655430   54797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:37:47.659480   54797 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 15:37:47.659534   54797 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:37:47.659620   54797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:37:47.659660   54797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:37:47.703463   54797 cri.go:89] found id: ""
	I0719 15:37:47.703531   54797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:37:47.714420   54797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:37:47.725207   54797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:37:47.736419   54797 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:37:47.736439   54797 kubeadm.go:157] found existing configuration files:
	
	I0719 15:37:47.736482   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:37:47.746985   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:37:47.747048   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:37:47.758031   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:37:47.767219   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:37:47.767285   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:37:47.776964   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:37:47.787863   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:37:47.787913   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:37:47.798171   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:37:47.808633   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:37:47.808692   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:37:47.819577   54797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:37:47.945430   54797 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:37:47.945568   54797 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:37:48.094512   54797 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:37:48.094661   54797 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:37:48.094782   54797 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:37:48.282319   54797 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:37:48.284113   54797 out.go:204]   - Generating certificates and keys ...
	I0719 15:37:48.284207   54797 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:37:48.284295   54797 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:37:48.476999   54797 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 15:37:48.816646   54797 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 15:37:48.932537   54797 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 15:37:49.305508   54797 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 15:37:49.604353   54797 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 15:37:49.604906   54797 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-862924] and IPs [192.168.50.102 127.0.0.1 ::1]
	I0719 15:37:49.769780   54797 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 15:37:49.770117   54797 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-862924] and IPs [192.168.50.102 127.0.0.1 ::1]
	I0719 15:37:49.879126   54797 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 15:37:49.966067   54797 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 15:37:50.480315   54797 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 15:37:50.480444   54797 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:37:50.667477   54797 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:37:50.854868   54797 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:37:50.991705   54797 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:37:51.107595   54797 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:37:51.137918   54797 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:37:51.139057   54797 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:37:51.139144   54797 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:37:51.299623   54797 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:37:51.301558   54797 out.go:204]   - Booting up control plane ...
	I0719 15:37:51.301686   54797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:37:51.306370   54797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:37:51.308170   54797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:37:51.309284   54797 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:37:51.315569   54797 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:38:31.276271   54797 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:38:31.277052   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:38:31.277313   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:38:36.277218   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:38:36.277508   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:38:46.275980   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:38:46.276248   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:39:06.275821   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:39:06.276064   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:39:46.274446   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:39:46.274709   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:39:46.274754   54797 kubeadm.go:310] 
	I0719 15:39:46.274821   54797 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:39:46.274899   54797 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:39:46.274916   54797 kubeadm.go:310] 
	I0719 15:39:46.274964   54797 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:39:46.275012   54797 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:39:46.275185   54797 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:39:46.275201   54797 kubeadm.go:310] 
	I0719 15:39:46.275322   54797 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:39:46.275366   54797 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:39:46.275412   54797 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:39:46.275427   54797 kubeadm.go:310] 
	I0719 15:39:46.275586   54797 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:39:46.275723   54797 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:39:46.275741   54797 kubeadm.go:310] 
	I0719 15:39:46.275883   54797 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:39:46.276048   54797 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:39:46.276140   54797 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:39:46.276257   54797 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:39:46.276276   54797 kubeadm.go:310] 
	I0719 15:39:46.277100   54797 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:39:46.277209   54797 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:39:46.277313   54797 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 15:39:46.277424   54797 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-862924] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-862924] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-862924] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-862924] and IPs [192.168.50.102 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 15:39:46.277468   54797 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:39:46.962144   54797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:39:46.977046   54797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:39:46.986779   54797 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:39:46.986797   54797 kubeadm.go:157] found existing configuration files:
	
	I0719 15:39:46.986837   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:39:46.998580   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:39:46.998636   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:39:47.010297   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:39:47.020509   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:39:47.020566   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:39:47.030165   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:39:47.039836   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:39:47.039878   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:39:47.049437   54797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:39:47.059241   54797 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:39:47.059287   54797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:39:47.069047   54797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:39:47.302401   54797 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:41:43.814887   54797 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:41:43.814996   54797 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:41:43.816582   54797 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:41:43.816651   54797 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:41:43.816750   54797 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:41:43.816863   54797 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:41:43.816989   54797 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:41:43.817083   54797 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:41:43.818905   54797 out.go:204]   - Generating certificates and keys ...
	I0719 15:41:43.818987   54797 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:41:43.819057   54797 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:41:43.819151   54797 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:41:43.819234   54797 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:41:43.819322   54797 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:41:43.819386   54797 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:41:43.819445   54797 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:41:43.819498   54797 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:41:43.819554   54797 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:41:43.819617   54797 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:41:43.819671   54797 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:41:43.819746   54797 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:41:43.819813   54797 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:41:43.819885   54797 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:41:43.819946   54797 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:41:43.819997   54797 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:41:43.820082   54797 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:41:43.820169   54797 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:41:43.820218   54797 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:41:43.820317   54797 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:41:43.822541   54797 out.go:204]   - Booting up control plane ...
	I0719 15:41:43.822634   54797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:41:43.822737   54797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:41:43.822836   54797 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:41:43.822957   54797 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:41:43.823144   54797 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:41:43.823211   54797 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:41:43.823314   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:41:43.823562   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:41:43.823657   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:41:43.823840   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:41:43.823937   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:41:43.824113   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:41:43.824199   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:41:43.824440   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:41:43.824509   54797 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:41:43.824685   54797 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:41:43.824701   54797 kubeadm.go:310] 
	I0719 15:41:43.824735   54797 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:41:43.824769   54797 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:41:43.824775   54797 kubeadm.go:310] 
	I0719 15:41:43.824809   54797 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:41:43.824838   54797 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:41:43.824923   54797 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:41:43.824929   54797 kubeadm.go:310] 
	I0719 15:41:43.825015   54797 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:41:43.825043   54797 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:41:43.825070   54797 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:41:43.825076   54797 kubeadm.go:310] 
	I0719 15:41:43.825175   54797 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:41:43.825277   54797 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:41:43.825286   54797 kubeadm.go:310] 
	I0719 15:41:43.825418   54797 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:41:43.825520   54797 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:41:43.825632   54797 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:41:43.825730   54797 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:41:43.825780   54797 kubeadm.go:310] 
	I0719 15:41:43.825809   54797 kubeadm.go:394] duration metric: took 3m56.166279101s to StartCluster
	I0719 15:41:43.825877   54797 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:41:43.825949   54797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:41:43.867914   54797 cri.go:89] found id: ""
	I0719 15:41:43.867944   54797 logs.go:276] 0 containers: []
	W0719 15:41:43.867956   54797 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:41:43.867964   54797 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:41:43.868022   54797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:41:43.903489   54797 cri.go:89] found id: ""
	I0719 15:41:43.903512   54797 logs.go:276] 0 containers: []
	W0719 15:41:43.903521   54797 logs.go:278] No container was found matching "etcd"
	I0719 15:41:43.903528   54797 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:41:43.903583   54797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:41:43.938329   54797 cri.go:89] found id: ""
	I0719 15:41:43.938359   54797 logs.go:276] 0 containers: []
	W0719 15:41:43.938369   54797 logs.go:278] No container was found matching "coredns"
	I0719 15:41:43.938374   54797 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:41:43.938425   54797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:41:43.971720   54797 cri.go:89] found id: ""
	I0719 15:41:43.971744   54797 logs.go:276] 0 containers: []
	W0719 15:41:43.971752   54797 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:41:43.971758   54797 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:41:43.971814   54797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:41:44.005691   54797 cri.go:89] found id: ""
	I0719 15:41:44.005716   54797 logs.go:276] 0 containers: []
	W0719 15:41:44.005724   54797 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:41:44.005729   54797 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:41:44.005778   54797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:41:44.044569   54797 cri.go:89] found id: ""
	I0719 15:41:44.044593   54797 logs.go:276] 0 containers: []
	W0719 15:41:44.044603   54797 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:41:44.044609   54797 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:41:44.044655   54797 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:41:44.077615   54797 cri.go:89] found id: ""
	I0719 15:41:44.077642   54797 logs.go:276] 0 containers: []
	W0719 15:41:44.077652   54797 logs.go:278] No container was found matching "kindnet"
	I0719 15:41:44.077670   54797 logs.go:123] Gathering logs for kubelet ...
	I0719 15:41:44.077684   54797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:41:44.131910   54797 logs.go:123] Gathering logs for dmesg ...
	I0719 15:41:44.131940   54797 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:41:44.146111   54797 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:41:44.146142   54797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:41:44.288867   54797 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:41:44.288889   54797 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:41:44.288903   54797 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:41:44.382824   54797 logs.go:123] Gathering logs for container status ...
	I0719 15:41:44.382857   54797 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:41:44.425473   54797 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:41:44.425514   54797 out.go:239] * 
	* 
	W0719 15:41:44.425564   54797 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:41:44.425586   54797 out.go:239] * 
	* 
	W0719 15:41:44.426447   54797 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:41:44.429199   54797 out.go:177] 
	W0719 15:41:44.430381   54797 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:41:44.430442   54797 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:41:44.430470   54797 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:41:44.432120   54797 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-862924 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 6 (215.162473ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:41:44.694568   57812 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862924" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (311.90s)
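
The root failure for this group is the wait-control-plane timeout above: kubeadm never saw a healthy kubelet on 127.0.0.1:10248, and minikube's own suggestion in the log is to read 'journalctl -xeu kubelet' and to pin the kubelet cgroup driver to systemd. A manual follow-up along those lines would look roughly like the sketch below; the profile name and the --extra-config flag are copied from the output above, and whether the override actually resolves the timeout on this host is not something this report shows.

    out/minikube-linux-amd64 ssh -p old-k8s-version-862924 -- sudo journalctl -xeu kubelet | tail -n 100
    out/minikube-linux-amd64 start -p old-k8s-version-862924 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd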

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-817144 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-817144 --alsologtostderr -v=3: exit status 82 (2m0.549059132s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-817144"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 15:39:52.958972   56926 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:39:52.959093   56926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:39:52.959102   56926 out.go:304] Setting ErrFile to fd 2...
	I0719 15:39:52.959107   56926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:39:52.959297   56926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:39:52.959542   56926 out.go:298] Setting JSON to false
	I0719 15:39:52.959624   56926 mustload.go:65] Loading cluster: embed-certs-817144
	I0719 15:39:52.959983   56926 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:39:52.960059   56926 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/config.json ...
	I0719 15:39:52.960249   56926 mustload.go:65] Loading cluster: embed-certs-817144
	I0719 15:39:52.960379   56926 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:39:52.960407   56926 stop.go:39] StopHost: embed-certs-817144
	I0719 15:39:52.960792   56926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:39:52.960830   56926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:39:52.978086   56926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0719 15:39:52.978636   56926 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:39:52.979228   56926 main.go:141] libmachine: Using API Version  1
	I0719 15:39:52.979254   56926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:39:52.979653   56926 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:39:52.981503   56926 out.go:177] * Stopping node "embed-certs-817144"  ...
	I0719 15:39:52.982945   56926 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 15:39:52.982983   56926 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:39:52.983235   56926 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 15:39:52.983272   56926 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:39:52.986443   56926 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:39:52.986908   56926 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:38:19 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:39:52.986948   56926 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:39:52.987044   56926 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:39:52.987275   56926 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:39:52.987451   56926 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:39:52.987654   56926 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:39:53.125526   56926 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 15:39:53.187670   56926 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 15:39:53.259563   56926 main.go:141] libmachine: Stopping "embed-certs-817144"...
	I0719 15:39:53.259586   56926 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:39:53.261589   56926 main.go:141] libmachine: (embed-certs-817144) Calling .Stop
	I0719 15:39:53.265624   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 0/120
	I0719 15:39:54.267270   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 1/120
	I0719 15:39:55.269443   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 2/120
	I0719 15:39:56.270842   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 3/120
	I0719 15:39:57.272230   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 4/120
	I0719 15:39:58.274745   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 5/120
	I0719 15:39:59.276184   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 6/120
	I0719 15:40:00.277654   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 7/120
	I0719 15:40:01.278929   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 8/120
	I0719 15:40:02.280798   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 9/120
	I0719 15:40:03.282911   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 10/120
	I0719 15:40:04.285019   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 11/120
	I0719 15:40:05.286617   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 12/120
	I0719 15:40:06.288296   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 13/120
	I0719 15:40:07.289576   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 14/120
	I0719 15:40:08.291375   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 15/120
	I0719 15:40:09.293946   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 16/120
	I0719 15:40:10.295290   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 17/120
	I0719 15:40:11.296748   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 18/120
	I0719 15:40:12.298420   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 19/120
	I0719 15:40:13.300717   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 20/120
	I0719 15:40:14.302223   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 21/120
	I0719 15:40:15.303631   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 22/120
	I0719 15:40:16.304978   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 23/120
	I0719 15:40:17.306342   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 24/120
	I0719 15:40:18.308184   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 25/120
	I0719 15:40:19.309702   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 26/120
	I0719 15:40:20.311256   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 27/120
	I0719 15:40:21.312623   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 28/120
	I0719 15:40:22.314039   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 29/120
	I0719 15:40:23.316080   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 30/120
	I0719 15:40:24.317378   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 31/120
	I0719 15:40:25.319528   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 32/120
	I0719 15:40:26.321114   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 33/120
	I0719 15:40:27.322709   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 34/120
	I0719 15:40:28.324573   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 35/120
	I0719 15:40:29.325935   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 36/120
	I0719 15:40:30.327737   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 37/120
	I0719 15:40:31.329090   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 38/120
	I0719 15:40:32.330228   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 39/120
	I0719 15:40:33.332334   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 40/120
	I0719 15:40:34.333580   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 41/120
	I0719 15:40:35.335189   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 42/120
	I0719 15:40:36.336498   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 43/120
	I0719 15:40:37.337918   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 44/120
	I0719 15:40:38.339917   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 45/120
	I0719 15:40:39.341167   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 46/120
	I0719 15:40:40.342595   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 47/120
	I0719 15:40:41.345036   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 48/120
	I0719 15:40:42.346639   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 49/120
	I0719 15:40:43.348767   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 50/120
	I0719 15:40:44.350147   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 51/120
	I0719 15:40:45.351424   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 52/120
	I0719 15:40:46.352808   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 53/120
	I0719 15:40:47.354012   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 54/120
	I0719 15:40:48.355572   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 55/120
	I0719 15:40:49.356722   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 56/120
	I0719 15:40:50.357912   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 57/120
	I0719 15:40:51.359230   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 58/120
	I0719 15:40:52.360565   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 59/120
	I0719 15:40:53.362521   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 60/120
	I0719 15:40:54.363886   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 61/120
	I0719 15:40:55.365101   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 62/120
	I0719 15:40:56.366609   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 63/120
	I0719 15:40:57.368593   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 64/120
	I0719 15:40:58.370508   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 65/120
	I0719 15:40:59.372650   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 66/120
	I0719 15:41:00.374525   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 67/120
	I0719 15:41:01.376767   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 68/120
	I0719 15:41:02.377993   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 69/120
	I0719 15:41:03.380200   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 70/120
	I0719 15:41:04.382456   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 71/120
	I0719 15:41:05.383838   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 72/120
	I0719 15:41:06.385155   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 73/120
	I0719 15:41:07.386961   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 74/120
	I0719 15:41:08.388580   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 75/120
	I0719 15:41:09.390031   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 76/120
	I0719 15:41:10.391410   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 77/120
	I0719 15:41:11.392738   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 78/120
	I0719 15:41:12.394025   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 79/120
	I0719 15:41:13.396118   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 80/120
	I0719 15:41:14.397537   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 81/120
	I0719 15:41:15.398791   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 82/120
	I0719 15:41:16.399952   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 83/120
	I0719 15:41:17.401579   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 84/120
	I0719 15:41:18.403307   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 85/120
	I0719 15:41:19.404625   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 86/120
	I0719 15:41:20.405724   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 87/120
	I0719 15:41:21.406993   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 88/120
	I0719 15:41:22.408626   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 89/120
	I0719 15:41:23.410799   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 90/120
	I0719 15:41:24.413124   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 91/120
	I0719 15:41:25.414468   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 92/120
	I0719 15:41:26.416906   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 93/120
	I0719 15:41:27.418249   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 94/120
	I0719 15:41:28.419553   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 95/120
	I0719 15:41:29.420883   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 96/120
	I0719 15:41:30.422266   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 97/120
	I0719 15:41:31.423474   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 98/120
	I0719 15:41:32.424629   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 99/120
	I0719 15:41:33.426606   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 100/120
	I0719 15:41:34.428725   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 101/120
	I0719 15:41:35.429965   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 102/120
	I0719 15:41:36.431235   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 103/120
	I0719 15:41:37.432662   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 104/120
	I0719 15:41:38.434533   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 105/120
	I0719 15:41:39.435834   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 106/120
	I0719 15:41:40.436943   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 107/120
	I0719 15:41:41.438249   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 108/120
	I0719 15:41:42.439304   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 109/120
	I0719 15:41:43.441452   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 110/120
	I0719 15:41:44.442785   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 111/120
	I0719 15:41:45.444489   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 112/120
	I0719 15:41:46.446010   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 113/120
	I0719 15:41:47.447423   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 114/120
	I0719 15:41:48.449385   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 115/120
	I0719 15:41:49.450988   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 116/120
	I0719 15:41:50.452231   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 117/120
	I0719 15:41:51.453557   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 118/120
	I0719 15:41:52.455016   56926 main.go:141] libmachine: (embed-certs-817144) Waiting for machine to stop 119/120
	I0719 15:41:53.455868   56926 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 15:41:53.455939   56926 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 15:41:53.457621   56926 out.go:177] 
	W0719 15:41:53.458970   56926 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 15:41:53.458990   56926 out.go:239] * 
	* 
	W0719 15:41:53.461456   56926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:41:53.462692   56926 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-817144 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144: exit status 3 (18.586126073s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:42:12.050617   57974 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.37:22: connect: no route to host
	E0719 15:42:12.050660   57974 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.37:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-817144" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.14s)
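
Exit status 82 here means the graceful stop timed out: libmachine polled the guest for all 120 attempts and it stayed in state "Running", and the follow-up status check could then no longer reach 192.168.72.37 over SSH at all. When a KVM guest ignores the stop request like this, its state can be confirmed and the machine forced off from the host with virsh; this is only a cleanup sketch, not a fix for whatever kept the guest from shutting down, and it assumes the libvirt domain name matches the one shown in the DBG lines above and the qemu:///system URI used by the test.

    virsh -c qemu:///system list --all                      # confirm the domain is still listed as running
    virsh -c qemu:///system shutdown embed-certs-817144     # ask the guest for an ACPI shutdown
    virsh -c qemu:///system destroy embed-certs-817144      # hard power-off if the guest keeps ignoring it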

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-382231 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-382231 --alsologtostderr -v=3: exit status 82 (2m0.82930311s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-382231"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 15:39:53.276548   56954 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:39:53.277021   56954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:39:53.277040   56954 out.go:304] Setting ErrFile to fd 2...
	I0719 15:39:53.277048   56954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:39:53.277473   56954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:39:53.277960   56954 out.go:298] Setting JSON to false
	I0719 15:39:53.278092   56954 mustload.go:65] Loading cluster: no-preload-382231
	I0719 15:39:53.279050   56954 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:39:53.279418   56954 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/config.json ...
	I0719 15:39:53.279629   56954 mustload.go:65] Loading cluster: no-preload-382231
	I0719 15:39:53.279826   56954 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:39:53.279874   56954 stop.go:39] StopHost: no-preload-382231
	I0719 15:39:53.280333   56954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:39:53.280389   56954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:39:53.296569   56954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I0719 15:39:53.297013   56954 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:39:53.297796   56954 main.go:141] libmachine: Using API Version  1
	I0719 15:39:53.297825   56954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:39:53.298179   56954 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:39:53.300564   56954 out.go:177] * Stopping node "no-preload-382231"  ...
	I0719 15:39:53.301923   56954 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 15:39:53.302000   56954 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:39:53.302266   56954 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 15:39:53.302292   56954 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:39:53.305054   56954 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:39:53.305436   56954 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:39:53.305456   56954 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:39:53.305580   56954 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:39:53.305769   56954 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:39:53.305903   56954 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:39:53.306060   56954 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:39:53.431234   56954 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 15:39:53.494658   56954 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 15:39:53.557896   56954 main.go:141] libmachine: Stopping "no-preload-382231"...
	I0719 15:39:53.557963   56954 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:39:53.559718   56954 main.go:141] libmachine: (no-preload-382231) Calling .Stop
	I0719 15:39:53.563579   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 0/120
	I0719 15:39:54.564996   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 1/120
	I0719 15:39:55.566291   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 2/120
	I0719 15:39:56.567574   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 3/120
	I0719 15:39:57.568764   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 4/120
	I0719 15:39:58.570405   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 5/120
	I0719 15:39:59.571722   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 6/120
	I0719 15:40:00.573536   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 7/120
	I0719 15:40:01.574964   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 8/120
	I0719 15:40:02.576654   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 9/120
	I0719 15:40:03.578384   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 10/120
	I0719 15:40:04.580728   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 11/120
	I0719 15:40:05.583110   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 12/120
	I0719 15:40:06.584797   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 13/120
	I0719 15:40:07.586471   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 14/120
	I0719 15:40:08.588325   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 15/120
	I0719 15:40:09.589736   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 16/120
	I0719 15:40:10.591058   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 17/120
	I0719 15:40:11.592475   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 18/120
	I0719 15:40:12.593743   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 19/120
	I0719 15:40:13.595873   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 20/120
	I0719 15:40:14.597260   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 21/120
	I0719 15:40:15.598858   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 22/120
	I0719 15:40:16.600771   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 23/120
	I0719 15:40:17.602133   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 24/120
	I0719 15:40:18.604224   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 25/120
	I0719 15:40:19.605458   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 26/120
	I0719 15:40:20.606891   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 27/120
	I0719 15:40:21.608352   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 28/120
	I0719 15:40:22.609762   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 29/120
	I0719 15:40:23.611759   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 30/120
	I0719 15:40:24.612946   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 31/120
	I0719 15:40:25.614227   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 32/120
	I0719 15:40:26.615459   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 33/120
	I0719 15:40:27.616761   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 34/120
	I0719 15:40:28.619006   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 35/120
	I0719 15:40:29.620539   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 36/120
	I0719 15:40:30.622649   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 37/120
	I0719 15:40:31.624388   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 38/120
	I0719 15:40:32.625979   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 39/120
	I0719 15:40:33.923872   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 40/120
	I0719 15:40:34.925628   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 41/120
	I0719 15:40:35.926981   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 42/120
	I0719 15:40:36.928575   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 43/120
	I0719 15:40:37.930107   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 44/120
	I0719 15:40:38.931738   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 45/120
	I0719 15:40:39.933174   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 46/120
	I0719 15:40:40.934829   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 47/120
	I0719 15:40:41.936203   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 48/120
	I0719 15:40:42.937890   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 49/120
	I0719 15:40:43.940087   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 50/120
	I0719 15:40:44.942129   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 51/120
	I0719 15:40:45.943690   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 52/120
	I0719 15:40:46.944906   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 53/120
	I0719 15:40:47.946120   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 54/120
	I0719 15:40:48.947797   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 55/120
	I0719 15:40:49.949180   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 56/120
	I0719 15:40:50.950585   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 57/120
	I0719 15:40:51.951802   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 58/120
	I0719 15:40:52.953171   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 59/120
	I0719 15:40:53.955151   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 60/120
	I0719 15:40:54.956397   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 61/120
	I0719 15:40:55.957865   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 62/120
	I0719 15:40:56.959396   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 63/120
	I0719 15:40:57.961774   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 64/120
	I0719 15:40:58.963609   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 65/120
	I0719 15:40:59.964977   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 66/120
	I0719 15:41:00.966419   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 67/120
	I0719 15:41:01.968916   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 68/120
	I0719 15:41:02.970142   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 69/120
	I0719 15:41:03.971484   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 70/120
	I0719 15:41:04.973891   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 71/120
	I0719 15:41:05.975312   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 72/120
	I0719 15:41:06.976648   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 73/120
	I0719 15:41:07.978036   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 74/120
	I0719 15:41:08.979392   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 75/120
	I0719 15:41:09.981113   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 76/120
	I0719 15:41:10.982699   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 77/120
	I0719 15:41:11.984766   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 78/120
	I0719 15:41:12.986344   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 79/120
	I0719 15:41:13.988389   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 80/120
	I0719 15:41:14.989627   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 81/120
	I0719 15:41:15.990998   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 82/120
	I0719 15:41:16.992668   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 83/120
	I0719 15:41:17.993786   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 84/120
	I0719 15:41:18.995140   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 85/120
	I0719 15:41:19.997009   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 86/120
	I0719 15:41:20.998331   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 87/120
	I0719 15:41:21.999766   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 88/120
	I0719 15:41:23.000999   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 89/120
	I0719 15:41:24.003410   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 90/120
	I0719 15:41:25.004630   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 91/120
	I0719 15:41:26.005962   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 92/120
	I0719 15:41:27.007602   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 93/120
	I0719 15:41:28.008786   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 94/120
	I0719 15:41:29.010547   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 95/120
	I0719 15:41:30.012059   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 96/120
	I0719 15:41:31.013423   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 97/120
	I0719 15:41:32.014964   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 98/120
	I0719 15:41:33.016254   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 99/120
	I0719 15:41:34.018469   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 100/120
	I0719 15:41:35.019652   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 101/120
	I0719 15:41:36.021220   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 102/120
	I0719 15:41:37.022598   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 103/120
	I0719 15:41:38.023979   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 104/120
	I0719 15:41:39.025733   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 105/120
	I0719 15:41:40.026862   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 106/120
	I0719 15:41:41.028731   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 107/120
	I0719 15:41:42.030060   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 108/120
	I0719 15:41:43.031574   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 109/120
	I0719 15:41:44.033851   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 110/120
	I0719 15:41:45.034939   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 111/120
	I0719 15:41:46.036674   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 112/120
	I0719 15:41:47.038024   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 113/120
	I0719 15:41:48.039491   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 114/120
	I0719 15:41:49.041160   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 115/120
	I0719 15:41:50.042564   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 116/120
	I0719 15:41:51.043978   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 117/120
	I0719 15:41:52.045400   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 118/120
	I0719 15:41:53.046737   56954 main.go:141] libmachine: (no-preload-382231) Waiting for machine to stop 119/120
	I0719 15:41:54.047926   56954 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 15:41:54.048001   56954 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 15:41:54.049773   56954 out.go:177] 
	W0719 15:41:54.050801   56954 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 15:41:54.050820   56954 out.go:239] * 
	* 
	W0719 15:41:54.053614   56954 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:41:54.054683   56954 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-382231 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231: exit status 3 (18.506159195s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:42:12.562566   58003 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	E0719 15:42:12.562587   58003 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-382231" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.34s)
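
This is the same stop timeout as the embed-certs group above, against a different profile (no-preload-382231, 192.168.39.227). Since the post-mortem status check then failed with "no route to host" on port 22, a quick way to tell whether the guest eventually powered off or simply dropped its address is to ask libvirt directly; a sketch, with the domain name and connection URI taken from the log above.

    virsh -c qemu:///system domstate no-preload-382231      # running / shut off / paused
    virsh -c qemu:///system domifaddr no-preload-382231     # current DHCP lease, if any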

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-862924 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-862924 create -f testdata/busybox.yaml: exit status 1 (39.481767ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-862924" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-862924 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 6 (217.043986ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:41:44.952648   57851 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862924" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 6 (219.659303ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:41:45.172234   57882 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862924" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
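
The create here fails before ever reaching a cluster: because the first start never completed, the profile was never written into /home/jenkins/minikube-integration/19302-3847/kubeconfig, so kubectl has no context named old-k8s-version-862924 to use. The status output points at minikube update-context as the remediation; the sketch below shows that flow, with the caveat that it only helps once the cluster has actually come up and there is an endpoint to write back.

    out/minikube-linux-amd64 update-context -p old-k8s-version-862924
    kubectl config get-contexts                              # confirm the context now exists
    kubectl --context old-k8s-version-862924 get nodes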

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (93.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-862924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-862924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m33.31771835s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-862924 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-862924 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-862924 describe deploy/metrics-server -n kube-system: exit status 1 (42.730063ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-862924" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-862924 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 6 (218.228749ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:43:18.750807   58685 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-862924" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (93.58s)
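
The addon enable fails for the same underlying reason as FirstStart: the kubectl apply is run inside the VM against localhost:8443, and the apiserver never came up, so the connection is refused. The crictl invocation kubeadm printed in the FirstStart log is the direct way to check what happened to the control-plane containers; run over minikube ssh it would look roughly like this (the socket path is copied from the kubeadm hint above, and CONTAINERID stands for whatever ID the first command reports).

    out/minikube-linux-amd64 ssh -p old-k8s-version-862924 -- \
      sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    out/minikube-linux-amd64 ssh -p old-k8s-version-862924 -- \
      sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID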

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-601445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-601445 --alsologtostderr -v=3: exit status 82 (2m0.493717142s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-601445"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 15:42:07.924387   58152 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:42:07.924642   58152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:42:07.924653   58152 out.go:304] Setting ErrFile to fd 2...
	I0719 15:42:07.924660   58152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:42:07.924897   58152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:42:07.925138   58152 out.go:298] Setting JSON to false
	I0719 15:42:07.925231   58152 mustload.go:65] Loading cluster: default-k8s-diff-port-601445
	I0719 15:42:07.925555   58152 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:42:07.925636   58152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:42:07.925830   58152 mustload.go:65] Loading cluster: default-k8s-diff-port-601445
	I0719 15:42:07.925962   58152 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:42:07.925996   58152 stop.go:39] StopHost: default-k8s-diff-port-601445
	I0719 15:42:07.926417   58152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:42:07.926470   58152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:42:07.941541   58152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33839
	I0719 15:42:07.942024   58152 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:42:07.942621   58152 main.go:141] libmachine: Using API Version  1
	I0719 15:42:07.942646   58152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:42:07.942985   58152 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:42:07.945430   58152 out.go:177] * Stopping node "default-k8s-diff-port-601445"  ...
	I0719 15:42:07.946815   58152 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 15:42:07.946856   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:42:07.947084   58152 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 15:42:07.947113   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:42:07.949919   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:42:07.950258   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:40:48 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:42:07.950285   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:42:07.950422   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:42:07.950604   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:42:07.950789   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:42:07.950945   58152 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:42:08.045302   58152 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 15:42:08.106330   58152 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 15:42:08.167267   58152 main.go:141] libmachine: Stopping "default-k8s-diff-port-601445"...
	I0719 15:42:08.167312   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:42:08.168608   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Stop
	I0719 15:42:08.172014   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 0/120
	I0719 15:42:09.173510   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 1/120
	I0719 15:42:10.174925   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 2/120
	I0719 15:42:11.176566   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 3/120
	I0719 15:42:12.177805   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 4/120
	I0719 15:42:13.179618   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 5/120
	I0719 15:42:14.180956   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 6/120
	I0719 15:42:15.182597   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 7/120
	I0719 15:42:16.184245   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 8/120
	I0719 15:42:17.185626   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 9/120
	I0719 15:42:18.187062   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 10/120
	I0719 15:42:19.188734   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 11/120
	I0719 15:42:20.190218   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 12/120
	I0719 15:42:21.191397   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 13/120
	I0719 15:42:22.192864   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 14/120
	I0719 15:42:23.195149   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 15/120
	I0719 15:42:24.196612   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 16/120
	I0719 15:42:25.197947   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 17/120
	I0719 15:42:26.199538   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 18/120
	I0719 15:42:27.201124   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 19/120
	I0719 15:42:28.203417   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 20/120
	I0719 15:42:29.205008   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 21/120
	I0719 15:42:30.206520   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 22/120
	I0719 15:42:31.208018   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 23/120
	I0719 15:42:32.209583   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 24/120
	I0719 15:42:33.211832   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 25/120
	I0719 15:42:34.213343   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 26/120
	I0719 15:42:35.214639   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 27/120
	I0719 15:42:36.216214   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 28/120
	I0719 15:42:37.218013   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 29/120
	I0719 15:42:38.220210   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 30/120
	I0719 15:42:39.221926   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 31/120
	I0719 15:42:40.223248   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 32/120
	I0719 15:42:41.224606   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 33/120
	I0719 15:42:42.226137   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 34/120
	I0719 15:42:43.228227   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 35/120
	I0719 15:42:44.229873   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 36/120
	I0719 15:42:45.231202   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 37/120
	I0719 15:42:46.232549   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 38/120
	I0719 15:42:47.234180   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 39/120
	I0719 15:42:48.236444   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 40/120
	I0719 15:42:49.238124   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 41/120
	I0719 15:42:50.239313   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 42/120
	I0719 15:42:51.240806   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 43/120
	I0719 15:42:52.242411   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 44/120
	I0719 15:42:53.244704   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 45/120
	I0719 15:42:54.246338   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 46/120
	I0719 15:42:55.247703   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 47/120
	I0719 15:42:56.249129   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 48/120
	I0719 15:42:57.250905   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 49/120
	I0719 15:42:58.253120   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 50/120
	I0719 15:42:59.254752   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 51/120
	I0719 15:43:00.256606   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 52/120
	I0719 15:43:01.258118   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 53/120
	I0719 15:43:02.259716   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 54/120
	I0719 15:43:03.262143   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 55/120
	I0719 15:43:04.264191   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 56/120
	I0719 15:43:05.265946   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 57/120
	I0719 15:43:06.267505   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 58/120
	I0719 15:43:07.269300   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 59/120
	I0719 15:43:08.270875   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 60/120
	I0719 15:43:09.272652   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 61/120
	I0719 15:43:10.274354   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 62/120
	I0719 15:43:11.275941   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 63/120
	I0719 15:43:12.277337   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 64/120
	I0719 15:43:13.279366   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 65/120
	I0719 15:43:14.280759   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 66/120
	I0719 15:43:15.282267   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 67/120
	I0719 15:43:16.283761   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 68/120
	I0719 15:43:17.285235   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 69/120
	I0719 15:43:18.287367   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 70/120
	I0719 15:43:19.288616   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 71/120
	I0719 15:43:20.289939   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 72/120
	I0719 15:43:21.291482   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 73/120
	I0719 15:43:22.292827   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 74/120
	I0719 15:43:23.294717   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 75/120
	I0719 15:43:24.296101   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 76/120
	I0719 15:43:25.297634   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 77/120
	I0719 15:43:26.299151   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 78/120
	I0719 15:43:27.300607   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 79/120
	I0719 15:43:28.303116   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 80/120
	I0719 15:43:29.304416   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 81/120
	I0719 15:43:30.305719   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 82/120
	I0719 15:43:31.307326   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 83/120
	I0719 15:43:32.308700   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 84/120
	I0719 15:43:33.310975   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 85/120
	I0719 15:43:34.312503   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 86/120
	I0719 15:43:35.314053   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 87/120
	I0719 15:43:36.315616   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 88/120
	I0719 15:43:37.317319   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 89/120
	I0719 15:43:38.319570   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 90/120
	I0719 15:43:39.320965   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 91/120
	I0719 15:43:40.322279   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 92/120
	I0719 15:43:41.323602   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 93/120
	I0719 15:43:42.325166   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 94/120
	I0719 15:43:43.327015   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 95/120
	I0719 15:43:44.328218   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 96/120
	I0719 15:43:45.329517   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 97/120
	I0719 15:43:46.331081   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 98/120
	I0719 15:43:47.332503   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 99/120
	I0719 15:43:48.334705   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 100/120
	I0719 15:43:49.336168   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 101/120
	I0719 15:43:50.337589   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 102/120
	I0719 15:43:51.339145   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 103/120
	I0719 15:43:52.340639   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 104/120
	I0719 15:43:53.342708   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 105/120
	I0719 15:43:54.344380   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 106/120
	I0719 15:43:55.345863   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 107/120
	I0719 15:43:56.347306   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 108/120
	I0719 15:43:57.348761   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 109/120
	I0719 15:43:58.350966   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 110/120
	I0719 15:43:59.352407   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 111/120
	I0719 15:44:00.353782   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 112/120
	I0719 15:44:01.355425   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 113/120
	I0719 15:44:02.356727   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 114/120
	I0719 15:44:03.358802   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 115/120
	I0719 15:44:04.360131   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 116/120
	I0719 15:44:05.361397   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 117/120
	I0719 15:44:06.363086   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 118/120
	I0719 15:44:07.364465   58152 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for machine to stop 119/120
	I0719 15:44:08.365630   58152 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 15:44:08.365698   58152 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 15:44:08.367880   58152 out.go:177] 
	W0719 15:44:08.369329   58152 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 15:44:08.369349   58152 out.go:239] * 
	* 
	W0719 15:44:08.372023   58152 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:44:08.373364   58152 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-601445 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445: exit status 3 (18.587589471s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:44:26.962505   59001 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	E0719 15:44:26.962527   59001 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601445" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)
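The stop command above polled the machine 120 times over two minutes and gave up with GUEST_STOP_TIMEOUT while the driver still reported the guest as "Running"; the later status call then failed with "no route to host". A hedged sketch for inspecting and force-stopping the underlying libvirt domain in this situation, assuming virsh is available on the Jenkins host and using the domain name shown in the log:

	# ask libvirt what state the domain is actually in
	virsh -c qemu:///system domstate default-k8s-diff-port-601445
	# hard power-off if a graceful shutdown never completes
	virsh -c qemu:///system destroy default-k8s-diff-port-601445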

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144: exit status 3 (3.167765783s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:42:15.218577   58170 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.37:22: connect: no route to host
	E0719 15:42:15.218595   58170 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.37:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-817144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-817144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153317235s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.37:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-817144 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144: exit status 3 (3.063045412s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:42:24.434632   58300 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.37:22: connect: no route to host
	E0719 15:42:24.434651   58300 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.37:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-817144" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
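Here the post-stop status check expected "Stopped", but every SSH attempt to 192.168.72.37:22 failed with "no route to host", so both the status call and the dashboard enable exit with errors. A small sketch, assuming the guest IP from this log, to tell an actually stopped VM apart from one that is up but unreachable from the host:

	# is the guest's SSH port reachable at all?
	nc -z -w 5 192.168.72.37 22; echo "exit=$?"
	# which route/interface the host would use to reach the guest network
	ip route get 192.168.72.37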

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231: exit status 3 (3.167515903s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:42:15.730576   58216 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	E0719 15:42:15.730594   58216 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-382231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-382231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153034656s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-382231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231: exit status 3 (3.062733524s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:42:24.946630   58330 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host
	E0719 15:42:24.946653   58330 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.227:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-382231" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (744.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-862924 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-862924 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m21.094374965s)
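The stdout below shows "Generating certificates and keys" and "Booting up control plane" printed twice, i.e. the control-plane bring-up was retried before the second start finally failed with exit status 109 after roughly 12 minutes. A hedged sketch for reproducing the failure with more detail, using the profile and a reduced set of the flags already seen in this report; the -v level is an assumption, not something the harness uses:

	# re-run the failing second start with verbose driver/bootstrapper logging
	out/minikube-linux-amd64 start -p old-k8s-version-862924 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --alsologtostderr -v=8
	# collect full cluster logs from the profile for later inspection
	out/minikube-linux-amd64 logs -p old-k8s-version-862924 --file=logs.txt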

                                                
                                                
-- stdout --
	* [old-k8s-version-862924] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-862924" primary control-plane node in "old-k8s-version-862924" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 15:43:23.260594   58817 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:43:23.260708   58817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:43:23.260716   58817 out.go:304] Setting ErrFile to fd 2...
	I0719 15:43:23.260720   58817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:43:23.260879   58817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:43:23.261344   58817 out.go:298] Setting JSON to false
	I0719 15:43:23.262228   58817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5149,"bootTime":1721398654,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:43:23.262304   58817 start.go:139] virtualization: kvm guest
	I0719 15:43:23.264528   58817 out.go:177] * [old-k8s-version-862924] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:43:23.265941   58817 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:43:23.265939   58817 notify.go:220] Checking for updates...
	I0719 15:43:23.267348   58817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:43:23.268733   58817 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:43:23.269998   58817 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:43:23.271163   58817 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:43:23.272396   58817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:43:23.274124   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:43:23.274534   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:43:23.274583   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:43:23.289722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I0719 15:43:23.290173   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:43:23.290823   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:43:23.290855   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:43:23.291145   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:43:23.291346   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:43:23.293222   58817 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 15:43:23.294640   58817 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:43:23.294978   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:43:23.295024   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:43:23.309436   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45703
	I0719 15:43:23.309819   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:43:23.310340   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:43:23.310362   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:43:23.310632   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:43:23.310813   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:43:23.344384   58817 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:43:23.345643   58817 start.go:297] selected driver: kvm2
	I0719 15:43:23.345658   58817 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:43:23.345788   58817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:43:23.346481   58817 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:43:23.346552   58817 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:43:23.360541   58817 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:43:23.360909   58817 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:43:23.360975   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:43:23.360992   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:43:23.361041   58817 start.go:340] cluster config:
	{Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:43:23.361147   58817 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:43:23.362814   58817 out.go:177] * Starting "old-k8s-version-862924" primary control-plane node in "old-k8s-version-862924" cluster
	I0719 15:43:23.364069   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:43:23.364100   58817 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 15:43:23.364109   58817 cache.go:56] Caching tarball of preloaded images
	I0719 15:43:23.364191   58817 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:43:23.364203   58817 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 15:43:23.364296   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:43:23.364469   58817 start.go:360] acquireMachinesLock for old-k8s-version-862924: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:47:19.278747   58817 start.go:364] duration metric: took 3m55.914249116s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:47:19.278822   58817 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:19.278831   58817 fix.go:54] fixHost starting: 
	I0719 15:47:19.279163   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:19.279196   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:19.294722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0719 15:47:19.295092   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:19.295537   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:47:19.295561   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:19.295950   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:19.296186   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:19.296333   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:47:19.297864   58817 fix.go:112] recreateIfNeeded on old-k8s-version-862924: state=Stopped err=<nil>
	I0719 15:47:19.297895   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	W0719 15:47:19.298077   58817 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:19.300041   58817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	I0719 15:47:19.301467   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .Start
	I0719 15:47:19.301647   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:47:19.302430   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:47:19.302790   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:47:19.303288   58817 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:47:19.304087   58817 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:47:20.540210   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:47:20.541173   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.541580   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.541657   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.541560   59851 retry.go:31] will retry after 276.525447ms: waiting for machine to come up
	I0719 15:47:20.820097   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.820549   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.820577   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.820512   59851 retry.go:31] will retry after 350.128419ms: waiting for machine to come up
	I0719 15:47:21.172277   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.172787   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.172814   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.172742   59851 retry.go:31] will retry after 437.780791ms: waiting for machine to come up
	I0719 15:47:21.612338   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.612766   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.612796   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.612710   59851 retry.go:31] will retry after 607.044351ms: waiting for machine to come up
	I0719 15:47:22.221152   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.221715   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.221755   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.221589   59851 retry.go:31] will retry after 568.388882ms: waiting for machine to come up
	I0719 15:47:22.791499   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.791966   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.791996   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.791912   59851 retry.go:31] will retry after 786.805254ms: waiting for machine to come up
	I0719 15:47:23.580485   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:23.580950   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:23.580983   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:23.580876   59851 retry.go:31] will retry after 919.322539ms: waiting for machine to come up
	I0719 15:47:24.502381   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:24.502817   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:24.502844   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:24.502776   59851 retry.go:31] will retry after 1.142581835s: waiting for machine to come up
	I0719 15:47:25.647200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:25.647663   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:25.647693   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:25.647559   59851 retry.go:31] will retry after 1.682329055s: waiting for machine to come up
	I0719 15:47:27.332531   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:27.333052   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:27.333080   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:27.333003   59851 retry.go:31] will retry after 1.579786507s: waiting for machine to come up
	I0719 15:47:28.914628   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:28.915181   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:28.915221   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:28.915127   59851 retry.go:31] will retry after 2.156491688s: waiting for machine to come up
	I0719 15:47:31.073521   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:31.074101   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:31.074136   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:31.074039   59851 retry.go:31] will retry after 2.252021853s: waiting for machine to come up
	I0719 15:47:33.328344   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:33.328815   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:33.328849   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:33.328779   59851 retry.go:31] will retry after 4.118454422s: waiting for machine to come up
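	The retry lines above come from libmachine polling libvirt's DHCP leases with a growing, jittered delay until the freshly started domain reports an address. A minimal Go sketch of that retry-with-backoff pattern, assuming a placeholder `lookupIP` helper (not minikube's actual API) and illustrative delay values:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP
// leases of the machine's network; it fails until a lease appears.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring
// the "will retry after ..." lines in the log above.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// add up to 50% jitter and grow the base delay
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %v", maxWait)
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```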
	I0719 15:47:37.451169   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.451651   58817 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:47:37.451677   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:47:37.451691   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.452205   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.452240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | skip adding static IP to network mk-old-k8s-version-862924 - found existing host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"}
	I0719 15:47:37.452258   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:47:37.452276   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:47:37.452287   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:47:37.454636   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455004   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.455043   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455210   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:47:37.455242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:47:37.455284   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:37.455302   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:47:37.455316   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:47:37.583375   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
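	The WaitForSSH step above shells out to the external ssh binary and runs a no-op command (`exit 0`) until the daemon answers. A small sketch of that readiness probe, with the address, key path, and retry count as placeholders rather than minikube's real values:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over the external ssh binary with options similar
// to the log above; a nil error means sshd is accepting connections.
func sshReady(addr, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	addr, key := "192.168.50.102", "/path/to/id_rsa" // placeholders
	for i := 0; i < 30; i++ {
		if sshReady(addr, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
```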
	I0719 15:47:37.583754   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:47:37.584481   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.587242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587644   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.587668   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587961   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:47:37.588195   58817 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:37.588217   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:37.588446   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.590801   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591137   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.591166   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.591471   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591592   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591736   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.591896   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.592100   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.592111   58817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:37.698760   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:37.698787   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699086   58817 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:47:37.699113   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699326   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.701828   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702208   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.702253   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702339   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.702508   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702674   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702817   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.702983   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.703136   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.703147   58817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:47:37.823930   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:47:37.823960   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.826546   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.826875   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.826912   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.827043   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.827336   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827506   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827690   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.827858   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.828039   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.828056   58817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:37.935860   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
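	The two SSH commands above set the guest hostname and then make sure /etc/hosts carries a matching 127.0.1.1 entry. A sketch, for illustration only, of composing that shell snippet for an arbitrary hostname (a helper written here, not minikube's generator):

```go
package main

import "fmt"

// hostnameCmd returns the shell snippet the provisioner runs over SSH:
// set the hostname, then make sure /etc/hosts has a 127.0.1.1 entry for it.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("old-k8s-version-862924"))
}
```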
	I0719 15:47:37.935888   58817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:37.935917   58817 buildroot.go:174] setting up certificates
	I0719 15:47:37.935927   58817 provision.go:84] configureAuth start
	I0719 15:47:37.935939   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.936223   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.938638   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.938990   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.939017   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.939170   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.941161   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941458   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.941487   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941597   58817 provision.go:143] copyHostCerts
	I0719 15:47:37.941669   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:37.941682   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:37.941731   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:37.941824   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:37.941832   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:37.941850   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:37.941910   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:37.941919   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:37.941942   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:37.942003   58817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
	I0719 15:47:38.046717   58817 provision.go:177] copyRemoteCerts
	I0719 15:47:38.046770   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:38.046799   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.049240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049578   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.049611   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049806   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.050026   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.050200   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.050377   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.133032   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:38.157804   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:47:38.184189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:38.207761   58817 provision.go:87] duration metric: took 271.801669ms to configureAuth
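	configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the node IP, localhost, minikube, and the profile name. A self-contained sketch of producing a certificate with that SAN list using crypto/x509; note it self-signs for brevity, whereas the log shows the cert being signed by the minikube CA:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative stand-alone example: a self-signed server cert carrying
	// the same SANs the log reports for old-k8s-version-862924.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-862924"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-862924"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.102")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```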
	I0719 15:47:38.207801   58817 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:38.208023   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:47:38.208148   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.211030   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211467   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.211497   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211675   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.211851   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212046   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212195   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.212374   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.212556   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.212578   58817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:38.503709   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:38.503737   58817 machine.go:97] duration metric: took 915.527957ms to provisionDockerMachine
	I0719 15:47:38.503750   58817 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:47:38.503762   58817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:38.503783   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.504151   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:38.504180   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.507475   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.507843   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.507877   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.508083   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.508314   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.508465   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.508583   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.593985   58817 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:38.598265   58817 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:38.598287   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:38.598352   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:38.598446   58817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:38.598533   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:38.609186   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:38.644767   58817 start.go:296] duration metric: took 141.002746ms for postStartSetup
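	The filesync scan above pairs each file under the local .minikube/files tree with the absolute path it should occupy on the guest (here .../files/etc/ssl/certs/110122.pem becoming /etc/ssl/certs/110122.pem). A minimal sketch of that mapping, assuming a placeholder root directory:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// localAssets walks a local "files" tree and maps each file to the
// absolute guest path it should be copied to, mirroring the filesync
// scan in the log above. The root path is a placeholder.
func localAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		assets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}

func main() {
	m, err := localAssets("/home/jenkins/.minikube/files") // placeholder root
	fmt.Println(m, err)
}
```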
	I0719 15:47:38.644808   58817 fix.go:56] duration metric: took 19.365976542s for fixHost
	I0719 15:47:38.644836   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.648171   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648545   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.648576   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648777   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.649009   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649185   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649360   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.649513   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.649779   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.649795   58817 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:47:38.758955   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404058.716653194
	
	I0719 15:47:38.758978   58817 fix.go:216] guest clock: 1721404058.716653194
	I0719 15:47:38.758987   58817 fix.go:229] Guest: 2024-07-19 15:47:38.716653194 +0000 UTC Remote: 2024-07-19 15:47:38.644812576 +0000 UTC m=+255.418683135 (delta=71.840618ms)
	I0719 15:47:38.759010   58817 fix.go:200] guest clock delta is within tolerance: 71.840618ms
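	The guest-clock check above runs `date +%s.%N` inside the VM, parses the result, and verifies the skew against the host clock is within tolerance. A sketch of the parsing and comparison, using the timestamp quoted in the log and an assumed tolerance value:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to exactly nine digits (nanoseconds)
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721404058.716653194") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```

	Run today against the timestamp recorded in this log the delta is of course enormous; the point is only how the `%s.%N` string is parsed and compared.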
	I0719 15:47:38.759017   58817 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 19.4802155s
	I0719 15:47:38.759056   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.759308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:38.761901   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762334   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.762368   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762525   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763030   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763198   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763296   58817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:38.763343   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.763489   58817 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:38.763522   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.766613   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.766771   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767028   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767050   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767219   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767298   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767377   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767453   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767577   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767637   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767723   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767768   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.767845   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.874680   58817 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:38.882155   58817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:39.030824   58817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:39.038357   58817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:39.038458   58817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:39.059981   58817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:39.060015   58817 start.go:495] detecting cgroup driver to use...
	I0719 15:47:39.060081   58817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:39.082631   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:39.101570   58817 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:39.101628   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:39.120103   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:39.139636   58817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:39.259574   58817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:39.441096   58817 docker.go:233] disabling docker service ...
	I0719 15:47:39.441162   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:39.460197   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:39.476884   58817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:39.639473   58817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:39.773468   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:39.790968   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:39.811330   58817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:47:39.811407   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.823965   58817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:39.824057   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.835454   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.846201   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.856951   58817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:39.869495   58817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:39.880850   58817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:39.880914   58817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:39.900465   58817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
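	The CRI-O setup above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager, and reset conmon_cgroup. A sketch that just assembles those shell commands for given values (illustrative helper, not minikube's code):

```go
package main

import "fmt"

// crioConfigCmds reproduces, for illustration, the edits the log applies
// to /etc/crio/crio.conf.d/02-crio.conf.
func crioConfigCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}
```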
	I0719 15:47:39.911488   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:40.032501   58817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:40.194606   58817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:40.194676   58817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:40.199572   58817 start.go:563] Will wait 60s for crictl version
	I0719 15:47:40.199683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:40.203747   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:40.246479   58817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:40.246594   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.275992   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.313199   58817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:47:40.314363   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:40.317688   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318081   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:40.318106   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318333   58817 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:40.323006   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:40.336488   58817 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:40.336626   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:47:40.336672   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:40.394863   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:40.394934   58817 ssh_runner.go:195] Run: which lz4
	I0719 15:47:40.399546   58817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:47:40.404163   58817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:47:40.404197   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:47:42.191817   58817 crio.go:462] duration metric: took 1.792317426s to copy over tarball
	I0719 15:47:42.191882   58817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:45.377244   58817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18533335s)
	I0719 15:47:45.377275   58817 crio.go:469] duration metric: took 3.185430213s to extract the tarball
	I0719 15:47:45.377282   58817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:47:45.422160   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:45.463351   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
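	The preload check above lists the runtime's images with `sudo crictl images --output json` and looks for the versioned kube-apiserver tag; when it is missing, minikube assumes nothing is preloaded and ships the tarball. A sketch of that check, assuming the JSON shape of crictl's list output (only the fields needed here are declared):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the shape of `crictl images --output json`;
// only the fields we need are declared.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already holds the given tag, which
// is how the preload check decides whether to transfer the tarball.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println(ok, err)
}
```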
	I0719 15:47:45.463377   58817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:45.463437   58817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.463445   58817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.463484   58817 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.463496   58817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.463452   58817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.463470   58817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465250   58817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465259   58817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.465270   58817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.465280   58817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.465252   58817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.465254   58817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.465322   58817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.465358   58817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.652138   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:47:45.694548   58817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:47:45.694600   58817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:47:45.694655   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.698969   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:47:45.721986   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.747138   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:47:45.779449   58817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:47:45.779485   58817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.779526   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.783597   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.822950   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.825025   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:47:45.830471   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.835797   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.837995   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.840998   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.907741   58817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:47:45.907793   58817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.907845   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.928805   58817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:47:45.928844   58817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.928918   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.948467   58817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:47:45.948522   58817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.948571   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.966584   58817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:47:45.966629   58817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.966683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975276   58817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:47:45.975316   58817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.975339   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.975355   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975378   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.975424   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.975449   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:46.069073   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:46.069100   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:47:46.079020   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:47:46.080816   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:47:46.080818   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:47:46.111983   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:47:46.308204   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:46.465651   58817 cache_images.go:92] duration metric: took 1.002255395s to LoadCachedImages
	W0719 15:47:46.465740   58817 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
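	Each "needs transfer" decision above compares the image ID podman reports for a tag against the ID the cache expects; a mismatch (or a missing image) means the cached copy must be loaded. A minimal sketch of that comparison, using the pause:3.2 ID quoted in the log purely as example data:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether an image must be (re)loaded into the node:
// it asks podman for the stored image ID and compares it with the ID the
// cache expects, mirroring the "needs transfer" lines above.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	if needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c") {
		fmt.Println("registry.k8s.io/pause:3.2 needs transfer")
	}
}
```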
	I0719 15:47:46.465753   58817 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:47:46.465899   58817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
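	The kubelet drop-in dumped above is assembled from a handful of node parameters (Kubernetes version, node name, node IP, CRI socket). A sketch of templating that unit from those values, using the same flags shown in the log; this is an illustrative generator, not minikube's own:

```go
package main

import (
	"fmt"
	"strings"
)

// kubeletUnit assembles a 10-kubeadm.conf-style drop-in from a few node
// parameters, matching the ExecStart line dumped in the log above.
func kubeletUnit(version, nodeName, nodeIP, criSock string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=" + criSock,
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--network-plugin=cni",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet %s

[Install]
`, version, strings.Join(flags, " "))
}

func main() {
	fmt.Print(kubeletUnit("v1.20.0", "old-k8s-version-862924",
		"192.168.50.102", "unix:///var/run/crio/crio.sock"))
}
```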
	I0719 15:47:46.465973   58817 ssh_runner.go:195] Run: crio config
	I0719 15:47:46.524125   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:47:46.524152   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:46.524167   58817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:46.524190   58817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:47:46.524322   58817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:46.524476   58817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:47:46.534654   58817 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:46.534726   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:46.544888   58817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:47:46.565864   58817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:47:46.584204   58817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:47:46.603470   58817 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:46.607776   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:46.624713   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:46.752753   58817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:46.776115   58817 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:47:46.776151   58817 certs.go:194] generating shared ca certs ...
	I0719 15:47:46.776182   58817 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:46.776376   58817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:46.776431   58817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:46.776443   58817 certs.go:256] generating profile certs ...
	I0719 15:47:46.776559   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:47:46.776622   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:47:46.776673   58817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:47:46.776811   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:46.776860   58817 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:46.776880   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:46.776922   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:46.776961   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:46.776991   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:46.777051   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:46.777929   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:46.815207   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:46.863189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:46.894161   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:46.932391   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:47:46.981696   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:47:47.016950   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:47.043597   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:47:47.067408   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:47.092082   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:47.116639   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:47.142425   58817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:47.161443   58817 ssh_runner.go:195] Run: openssl version
	I0719 15:47:47.167678   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:47.180194   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185276   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185330   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.191437   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:47.203471   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:47.215645   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220392   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220444   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.226332   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:47.238559   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:47.251382   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256213   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256268   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.262261   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:47.275192   58817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:47.280176   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:47.288308   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:47.295013   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:47.301552   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:47.307628   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:47.313505   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
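The openssl calls above are the certificate checks run before reusing the existing control plane: each CA is hashed and symlinked into /etc/ssl/certs, and each serving cert is verified to stay valid for at least another 24 hours. A sketch of the same checks, assuming shell access to the node:

    # print the subject hash that names the /etc/ssl/certs/<hash>.0 symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # exits 0 only if the cert is still valid 86400 seconds (24 h) from now
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400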
	I0719 15:47:47.319956   58817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:47.320042   58817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:47.320097   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.359706   58817 cri.go:89] found id: ""
	I0719 15:47:47.359789   58817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:47.373816   58817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:47.373839   58817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:47.373907   58817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:47.386334   58817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:47.387432   58817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:47:47.388146   58817 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-3847/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862924" cluster setting kubeconfig missing "old-k8s-version-862924" context setting]
	I0719 15:47:47.389641   58817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:47.393000   58817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:47.404737   58817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.102
	I0719 15:47:47.404770   58817 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:47.404782   58817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:47.404847   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.448460   58817 cri.go:89] found id: ""
	I0719 15:47:47.448529   58817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:47.466897   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:47.479093   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:47.479136   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:47.479201   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:47.490338   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:47.490425   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:47.502079   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:47.514653   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:47.514722   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:47.526533   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.536043   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:47.536109   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.545691   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:47.555221   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:47.555295   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:47.564645   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:47.574094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:47.740041   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.272472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.545776   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.692516   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
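With certs, kubeconfigs, the kubelet and the static-pod manifests regenerated phase by phase, minikube now polls roughly every 500 ms for a kube-apiserver process. The probe it repeats below can be reproduced directly on the node:

    # exits 0 (and prints the PID) once a kube-apiserver whose command line
    # mentions "minikube" is running; non-zero while the control plane is down
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'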
	I0719 15:47:48.799640   58817 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:48.799721   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.299983   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.800470   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.300833   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.800741   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.300351   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.800185   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.300353   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.800804   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.800691   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.800502   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.300314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.300773   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.299763   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.800069   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.299998   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.800005   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.300717   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.800601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.300433   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.800788   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.300324   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.300240   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.799829   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.299793   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.800609   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.300595   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.799844   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.800150   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.299923   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.800063   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.300278   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.799805   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.299882   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.800690   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.300543   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.300260   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.800160   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.299986   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.800036   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.300736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.799875   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.300297   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.800535   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.299951   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.800667   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.300251   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.800590   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.300557   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.800420   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.300696   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.799874   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.300803   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.800634   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.300760   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.799929   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.300267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.800463   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.300116   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.800737   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.300641   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.800158   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.300678   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.800635   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.299778   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.799791   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.299845   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.300034   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.800118   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.300099   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.800538   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.300194   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.800056   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.300473   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.300181   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.800267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.300279   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.800631   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.300013   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.800051   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.300468   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.800383   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.300186   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.800623   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.300068   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.799841   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.300002   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.800639   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.300564   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.800314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.300642   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.799787   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.299849   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.300242   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.800481   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.300412   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.300117   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.799821   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.300031   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.800676   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.300710   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.800307   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.800008   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.300512   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.799929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:48.799998   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:48.839823   58817 cri.go:89] found id: ""
	I0719 15:48:48.839845   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.839852   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:48.839863   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:48.839920   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:48.874635   58817 cri.go:89] found id: ""
	I0719 15:48:48.874661   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.874671   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:48.874679   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:48.874736   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:48.909391   58817 cri.go:89] found id: ""
	I0719 15:48:48.909417   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.909426   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:48.909431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:48.909491   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:48.951232   58817 cri.go:89] found id: ""
	I0719 15:48:48.951258   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.951265   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:48.951271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:48.951323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:48.984391   58817 cri.go:89] found id: ""
	I0719 15:48:48.984413   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.984420   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:48.984426   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:48.984481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:49.018949   58817 cri.go:89] found id: ""
	I0719 15:48:49.018987   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.018996   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:49.019003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:49.019060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:49.055182   58817 cri.go:89] found id: ""
	I0719 15:48:49.055208   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.055217   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:49.055222   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:49.055270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:49.090341   58817 cri.go:89] found id: ""
	I0719 15:48:49.090364   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.090371   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:49.090378   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:49.090390   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:49.104137   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:49.104166   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:49.239447   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
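The "connection refused" simply restates the failure: no apiserver container exists yet, so nothing is listening on 8443. A hypothetical manual probe from inside the VM (assuming curl is available there) would show the same thing:

    # fails while no kube-apiserver is serving on localhost:8443
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"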
	I0719 15:48:49.239473   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:49.239489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:49.307270   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:49.307307   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:49.345886   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:49.345925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:51.898153   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:51.911943   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:51.912006   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:51.946512   58817 cri.go:89] found id: ""
	I0719 15:48:51.946562   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.946573   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:51.946603   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:51.946664   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:51.982341   58817 cri.go:89] found id: ""
	I0719 15:48:51.982373   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.982381   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:51.982387   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:51.982441   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:52.019705   58817 cri.go:89] found id: ""
	I0719 15:48:52.019732   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.019739   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:52.019744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:52.019799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:52.057221   58817 cri.go:89] found id: ""
	I0719 15:48:52.057250   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.057262   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:52.057271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:52.057353   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:52.097277   58817 cri.go:89] found id: ""
	I0719 15:48:52.097306   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.097317   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:52.097325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:52.097389   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:52.136354   58817 cri.go:89] found id: ""
	I0719 15:48:52.136398   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.136406   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:52.136412   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:52.136463   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:52.172475   58817 cri.go:89] found id: ""
	I0719 15:48:52.172502   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.172510   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:52.172516   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:52.172565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:52.209164   58817 cri.go:89] found id: ""
	I0719 15:48:52.209192   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.209204   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:52.209214   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:52.209238   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:52.260069   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:52.260101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:52.274794   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:52.274825   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:52.356599   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:52.356628   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:52.356650   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:52.427582   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:52.427630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:54.977864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:54.993571   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:54.993645   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:55.034576   58817 cri.go:89] found id: ""
	I0719 15:48:55.034630   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.034641   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:55.034649   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:55.034712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:55.068305   58817 cri.go:89] found id: ""
	I0719 15:48:55.068332   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.068343   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:55.068350   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:55.068408   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:55.106192   58817 cri.go:89] found id: ""
	I0719 15:48:55.106220   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.106227   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:55.106248   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:55.106304   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:55.141287   58817 cri.go:89] found id: ""
	I0719 15:48:55.141318   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.141328   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:55.141334   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:55.141391   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:55.179965   58817 cri.go:89] found id: ""
	I0719 15:48:55.179989   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.179999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:55.180007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:55.180065   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.213558   58817 cri.go:89] found id: ""
	I0719 15:48:55.213588   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.213598   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:55.213607   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:55.213663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:55.247201   58817 cri.go:89] found id: ""
	I0719 15:48:55.247230   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.247243   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:55.247250   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:55.247309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:55.283157   58817 cri.go:89] found id: ""
	I0719 15:48:55.283191   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.283200   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:55.283211   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:55.283228   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:55.361089   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:55.361116   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:55.361134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:55.437784   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:55.437819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:55.480735   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:55.480770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:55.534013   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:55.534045   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.048567   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:58.063073   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:58.063146   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:58.100499   58817 cri.go:89] found id: ""
	I0719 15:48:58.100527   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.100538   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:58.100545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:58.100612   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:58.136885   58817 cri.go:89] found id: ""
	I0719 15:48:58.136913   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.136924   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:58.136932   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:58.137000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:58.172034   58817 cri.go:89] found id: ""
	I0719 15:48:58.172064   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.172074   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:58.172081   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:58.172135   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:58.209113   58817 cri.go:89] found id: ""
	I0719 15:48:58.209145   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.209157   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:58.209166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:58.209256   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:58.258903   58817 cri.go:89] found id: ""
	I0719 15:48:58.258938   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.258949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:58.258957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:58.259016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:58.312314   58817 cri.go:89] found id: ""
	I0719 15:48:58.312342   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.312353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:58.312361   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:58.312421   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:58.349566   58817 cri.go:89] found id: ""
	I0719 15:48:58.349628   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.349638   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:58.349645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:58.349709   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:58.383834   58817 cri.go:89] found id: ""
	I0719 15:48:58.383863   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.383880   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:58.383893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:58.383907   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:58.436984   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:58.437020   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.450460   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:58.450489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:58.523392   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:58.523408   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:58.523420   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:58.601407   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:58.601439   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:01.141864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:01.155908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:01.155965   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:01.191492   58817 cri.go:89] found id: ""
	I0719 15:49:01.191524   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.191534   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:01.191542   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:01.191623   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:01.227615   58817 cri.go:89] found id: ""
	I0719 15:49:01.227646   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.227653   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:01.227659   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:01.227716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:01.262624   58817 cri.go:89] found id: ""
	I0719 15:49:01.262647   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.262655   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:01.262661   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:01.262717   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:01.298328   58817 cri.go:89] found id: ""
	I0719 15:49:01.298358   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.298370   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:01.298378   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:01.298439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:01.333181   58817 cri.go:89] found id: ""
	I0719 15:49:01.333208   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.333218   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:01.333225   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:01.333284   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:01.369952   58817 cri.go:89] found id: ""
	I0719 15:49:01.369980   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.369990   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:01.369997   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:01.370076   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:01.405232   58817 cri.go:89] found id: ""
	I0719 15:49:01.405263   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.405273   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:01.405280   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:01.405340   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:01.442960   58817 cri.go:89] found id: ""
	I0719 15:49:01.442989   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.442999   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:01.443009   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:01.443036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:01.493680   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:01.493712   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:01.506699   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:01.506732   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:01.586525   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:01.586547   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:01.586562   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:01.673849   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:01.673897   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:04.219314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:04.233386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:04.233481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:04.274762   58817 cri.go:89] found id: ""
	I0719 15:49:04.274792   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.274802   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:04.274826   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:04.274881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:04.312047   58817 cri.go:89] found id: ""
	I0719 15:49:04.312073   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.312082   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:04.312089   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:04.312164   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:04.351258   58817 cri.go:89] found id: ""
	I0719 15:49:04.351293   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.351307   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:04.351314   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:04.351373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:04.385969   58817 cri.go:89] found id: ""
	I0719 15:49:04.385994   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.386002   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:04.386007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:04.386054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:04.425318   58817 cri.go:89] found id: ""
	I0719 15:49:04.425342   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.425351   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:04.425358   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:04.425416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:04.462578   58817 cri.go:89] found id: ""
	I0719 15:49:04.462607   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.462618   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:04.462626   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:04.462682   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:04.502967   58817 cri.go:89] found id: ""
	I0719 15:49:04.502999   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.503017   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:04.503025   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:04.503084   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:04.540154   58817 cri.go:89] found id: ""
	I0719 15:49:04.540185   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.540195   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:04.540230   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:04.540246   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:04.596126   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:04.596164   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:04.610468   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:04.610509   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:04.683759   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:04.683783   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:04.683803   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:04.764758   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:04.764796   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
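The block above is one complete pass of the checks the test repeats roughly every three seconds while waiting for the control plane: a pgrep for a kube-apiserver process, crictl queries for each expected container (all returning empty), then log collection from kubelet, dmesg, CRI-O and `kubectl describe nodes`. As a sketch only, the commands below are copied from the log so the same snapshot can be reproduced by hand on the node; running them interactively is a suggestion for debugging, not something the test itself does:

    sudo pgrep -xnf kube-apiserver.*minikube.*              # is an apiserver process running at all?
    sudo crictl ps -a --quiet --name=kube-apiserver         # empty output means no apiserver container exists
    sudo journalctl -u kubelet -n 400                        # recent kubelet logs
    sudo journalctl -u crio -n 400                           # recent CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig              # fails with "connection refused" while the apiserver is down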
	I0719 15:49:07.303933   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:07.317959   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:07.318031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:07.356462   58817 cri.go:89] found id: ""
	I0719 15:49:07.356490   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.356498   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:07.356511   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:07.356566   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:07.391533   58817 cri.go:89] found id: ""
	I0719 15:49:07.391563   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.391574   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:07.391582   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:07.391662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:07.427877   58817 cri.go:89] found id: ""
	I0719 15:49:07.427914   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.427922   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:07.427927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:07.428005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:07.464667   58817 cri.go:89] found id: ""
	I0719 15:49:07.464691   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.464699   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:07.464704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:07.464768   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:07.499296   58817 cri.go:89] found id: ""
	I0719 15:49:07.499321   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.499329   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:07.499336   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:07.499400   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:07.541683   58817 cri.go:89] found id: ""
	I0719 15:49:07.541715   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.541726   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:07.541733   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:07.541791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:07.577698   58817 cri.go:89] found id: ""
	I0719 15:49:07.577726   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.577737   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:07.577744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:07.577799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:07.613871   58817 cri.go:89] found id: ""
	I0719 15:49:07.613904   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.613914   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:07.613926   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:07.613942   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:07.690982   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:07.691006   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:07.691021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:07.778212   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:07.778277   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.820821   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:07.820866   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:07.873053   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:07.873097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.387941   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:10.401132   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:10.401205   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:10.437084   58817 cri.go:89] found id: ""
	I0719 15:49:10.437112   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.437120   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:10.437178   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:10.437243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:10.472675   58817 cri.go:89] found id: ""
	I0719 15:49:10.472703   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.472712   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:10.472720   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:10.472780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:10.506448   58817 cri.go:89] found id: ""
	I0719 15:49:10.506480   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.506490   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:10.506497   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:10.506544   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:10.542574   58817 cri.go:89] found id: ""
	I0719 15:49:10.542604   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.542612   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:10.542618   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:10.542701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:10.575963   58817 cri.go:89] found id: ""
	I0719 15:49:10.575990   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.575999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:10.576005   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:10.576063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:10.614498   58817 cri.go:89] found id: ""
	I0719 15:49:10.614529   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.614539   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:10.614548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:10.614613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:10.652802   58817 cri.go:89] found id: ""
	I0719 15:49:10.652825   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.652833   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:10.652838   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:10.652886   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:10.688985   58817 cri.go:89] found id: ""
	I0719 15:49:10.689019   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.689029   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:10.689041   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:10.689058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:10.741552   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:10.741586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.756514   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:10.756542   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:10.837916   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:10.837940   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:10.837956   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:10.919878   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:10.919924   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.462603   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:13.476387   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:13.476449   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:13.514170   58817 cri.go:89] found id: ""
	I0719 15:49:13.514195   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.514205   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:13.514211   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:13.514281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:13.548712   58817 cri.go:89] found id: ""
	I0719 15:49:13.548739   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.548747   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:13.548753   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:13.548808   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:13.582623   58817 cri.go:89] found id: ""
	I0719 15:49:13.582648   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.582657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:13.582664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:13.582721   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:13.619343   58817 cri.go:89] found id: ""
	I0719 15:49:13.619369   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.619379   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:13.619385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:13.619444   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:13.655755   58817 cri.go:89] found id: ""
	I0719 15:49:13.655785   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.655793   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:13.655798   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:13.655856   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:13.691021   58817 cri.go:89] found id: ""
	I0719 15:49:13.691104   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.691124   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:13.691133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:13.691196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:13.728354   58817 cri.go:89] found id: ""
	I0719 15:49:13.728380   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.728390   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:13.728397   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:13.728459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:13.764498   58817 cri.go:89] found id: ""
	I0719 15:49:13.764526   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.764535   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:13.764544   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:13.764557   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.803474   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:13.803500   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:13.854709   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:13.854742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:13.870499   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:13.870526   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:13.943250   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:13.943270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:13.943282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.525806   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:16.539483   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:16.539558   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:16.574003   58817 cri.go:89] found id: ""
	I0719 15:49:16.574032   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.574043   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:16.574050   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:16.574112   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:16.610637   58817 cri.go:89] found id: ""
	I0719 15:49:16.610668   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.610676   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:16.610682   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:16.610731   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:16.648926   58817 cri.go:89] found id: ""
	I0719 15:49:16.648957   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.648968   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:16.648975   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:16.649027   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:16.682819   58817 cri.go:89] found id: ""
	I0719 15:49:16.682848   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.682859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:16.682866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:16.682919   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:16.719879   58817 cri.go:89] found id: ""
	I0719 15:49:16.719912   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.719922   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:16.719930   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:16.719988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:16.755776   58817 cri.go:89] found id: ""
	I0719 15:49:16.755809   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.755820   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:16.755829   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:16.755903   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:16.792158   58817 cri.go:89] found id: ""
	I0719 15:49:16.792186   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.792193   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:16.792199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:16.792260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:16.829694   58817 cri.go:89] found id: ""
	I0719 15:49:16.829722   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.829733   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:16.829741   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:16.829761   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:16.843522   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:16.843552   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:16.914025   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:16.914047   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:16.914063   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.996672   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:16.996709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:17.042138   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:17.042170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.597598   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:19.611433   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:19.611487   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:19.646047   58817 cri.go:89] found id: ""
	I0719 15:49:19.646073   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.646080   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:19.646086   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:19.646145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:19.683589   58817 cri.go:89] found id: ""
	I0719 15:49:19.683620   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.683632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:19.683643   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:19.683701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:19.722734   58817 cri.go:89] found id: ""
	I0719 15:49:19.722761   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.722771   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:19.722778   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:19.722836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:19.759418   58817 cri.go:89] found id: ""
	I0719 15:49:19.759445   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.759454   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:19.759459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:19.759522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:19.795168   58817 cri.go:89] found id: ""
	I0719 15:49:19.795193   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.795201   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:19.795206   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:19.795259   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:19.830930   58817 cri.go:89] found id: ""
	I0719 15:49:19.830959   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.830969   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:19.830976   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:19.831035   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:19.866165   58817 cri.go:89] found id: ""
	I0719 15:49:19.866187   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.866195   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:19.866201   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:19.866252   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:19.899415   58817 cri.go:89] found id: ""
	I0719 15:49:19.899446   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.899456   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:19.899467   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:19.899482   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.950944   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:19.950975   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:19.964523   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:19.964545   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:20.032244   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:20.032270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:20.032290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:20.110285   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:20.110317   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:22.650693   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:22.666545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:22.666618   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:22.709820   58817 cri.go:89] found id: ""
	I0719 15:49:22.709846   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.709854   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:22.709860   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:22.709905   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:22.745373   58817 cri.go:89] found id: ""
	I0719 15:49:22.745398   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.745406   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:22.745411   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:22.745461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:22.785795   58817 cri.go:89] found id: ""
	I0719 15:49:22.785828   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.785838   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:22.785846   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:22.785904   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:22.826542   58817 cri.go:89] found id: ""
	I0719 15:49:22.826569   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.826579   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:22.826587   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:22.826648   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:22.866761   58817 cri.go:89] found id: ""
	I0719 15:49:22.866789   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.866800   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:22.866807   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:22.866868   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:22.913969   58817 cri.go:89] found id: ""
	I0719 15:49:22.913999   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.914009   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:22.914017   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:22.914082   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:22.950230   58817 cri.go:89] found id: ""
	I0719 15:49:22.950287   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.950298   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:22.950305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:22.950366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:22.986400   58817 cri.go:89] found id: ""
	I0719 15:49:22.986424   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.986434   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:22.986446   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:22.986460   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:23.072119   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:23.072153   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:23.111021   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:23.111053   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:23.161490   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:23.161518   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:23.174729   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:23.174766   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:23.251205   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:25.752355   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:25.765501   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:25.765559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:25.801073   58817 cri.go:89] found id: ""
	I0719 15:49:25.801107   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.801117   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:25.801126   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:25.801187   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:25.839126   58817 cri.go:89] found id: ""
	I0719 15:49:25.839151   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.839158   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:25.839163   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:25.839210   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:25.873081   58817 cri.go:89] found id: ""
	I0719 15:49:25.873110   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.873120   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:25.873134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:25.873183   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:25.908874   58817 cri.go:89] found id: ""
	I0719 15:49:25.908910   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.908921   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:25.908929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:25.908988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:25.945406   58817 cri.go:89] found id: ""
	I0719 15:49:25.945431   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.945439   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:25.945445   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:25.945515   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:25.978276   58817 cri.go:89] found id: ""
	I0719 15:49:25.978298   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.978306   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:25.978312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:25.978359   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:26.013749   58817 cri.go:89] found id: ""
	I0719 15:49:26.013776   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.013786   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:26.013792   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:26.013840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:26.046225   58817 cri.go:89] found id: ""
	I0719 15:49:26.046269   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.046280   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:26.046290   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:26.046305   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:26.086785   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:26.086808   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:26.138746   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:26.138777   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:26.152114   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:26.152139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:26.224234   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:26.224262   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:26.224279   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:28.802738   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:28.817246   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:28.817321   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:28.852398   58817 cri.go:89] found id: ""
	I0719 15:49:28.852429   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.852437   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:28.852449   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:28.852500   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:28.890337   58817 cri.go:89] found id: ""
	I0719 15:49:28.890368   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.890378   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:28.890386   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:28.890446   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:28.929083   58817 cri.go:89] found id: ""
	I0719 15:49:28.929106   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.929113   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:28.929119   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:28.929173   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:28.967708   58817 cri.go:89] found id: ""
	I0719 15:49:28.967735   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.967745   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:28.967752   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:28.967812   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:29.001087   58817 cri.go:89] found id: ""
	I0719 15:49:29.001115   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.001131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:29.001139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:29.001198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:29.039227   58817 cri.go:89] found id: ""
	I0719 15:49:29.039258   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.039268   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:29.039275   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:29.039333   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:29.079927   58817 cri.go:89] found id: ""
	I0719 15:49:29.079955   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.079965   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:29.079973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:29.080037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:29.115035   58817 cri.go:89] found id: ""
	I0719 15:49:29.115060   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.115070   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:29.115080   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:29.115094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:29.168452   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:29.168487   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:29.182483   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:29.182517   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:29.256139   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:29.256177   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:29.256193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:29.342435   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:29.342472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:31.888988   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:31.902450   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:31.902524   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:31.940007   58817 cri.go:89] found id: ""
	I0719 15:49:31.940035   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.940045   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:31.940053   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:31.940111   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:31.978055   58817 cri.go:89] found id: ""
	I0719 15:49:31.978089   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.978101   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:31.978109   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:31.978168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:32.011666   58817 cri.go:89] found id: ""
	I0719 15:49:32.011697   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.011707   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:32.011714   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:32.011779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:32.046326   58817 cri.go:89] found id: ""
	I0719 15:49:32.046363   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.046373   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:32.046383   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:32.046447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:32.082387   58817 cri.go:89] found id: ""
	I0719 15:49:32.082416   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.082425   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:32.082432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:32.082488   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:32.118653   58817 cri.go:89] found id: ""
	I0719 15:49:32.118693   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.118703   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:32.118710   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:32.118769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:32.154053   58817 cri.go:89] found id: ""
	I0719 15:49:32.154075   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.154082   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:32.154088   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:32.154134   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:32.189242   58817 cri.go:89] found id: ""
	I0719 15:49:32.189272   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.189283   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:32.189293   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:32.189309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:32.263285   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:32.263313   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:32.263329   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:32.341266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:32.341302   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:32.380827   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:32.380852   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:32.432888   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:32.432922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:34.948894   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:34.963787   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:34.963840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:35.000752   58817 cri.go:89] found id: ""
	I0719 15:49:35.000782   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.000788   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:35.000794   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:35.000849   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:35.038325   58817 cri.go:89] found id: ""
	I0719 15:49:35.038355   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.038367   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:35.038375   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:35.038433   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:35.074945   58817 cri.go:89] found id: ""
	I0719 15:49:35.074972   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.074981   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:35.074987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:35.075031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:35.111644   58817 cri.go:89] found id: ""
	I0719 15:49:35.111671   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.111681   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:35.111688   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:35.111746   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:35.146101   58817 cri.go:89] found id: ""
	I0719 15:49:35.146132   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.146141   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:35.146148   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:35.146198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:35.185147   58817 cri.go:89] found id: ""
	I0719 15:49:35.185173   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.185181   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:35.185188   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:35.185233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:35.227899   58817 cri.go:89] found id: ""
	I0719 15:49:35.227931   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.227941   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:35.227949   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:35.228010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:35.265417   58817 cri.go:89] found id: ""
	I0719 15:49:35.265441   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.265451   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:35.265462   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:35.265477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:35.316534   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:35.316567   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:35.330131   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:35.330154   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:35.401068   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:35.401091   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:35.401107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:35.477126   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:35.477170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.019443   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:38.035957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:38.036032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:38.078249   58817 cri.go:89] found id: ""
	I0719 15:49:38.078278   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.078288   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:38.078296   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:38.078367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:38.125072   58817 cri.go:89] found id: ""
	I0719 15:49:38.125098   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.125106   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:38.125112   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:38.125171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:38.165134   58817 cri.go:89] found id: ""
	I0719 15:49:38.165160   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.165170   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:38.165178   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:38.165233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:38.204968   58817 cri.go:89] found id: ""
	I0719 15:49:38.204995   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.205004   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:38.205013   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:38.205074   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:38.237132   58817 cri.go:89] found id: ""
	I0719 15:49:38.237157   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.237167   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:38.237174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:38.237231   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:38.274661   58817 cri.go:89] found id: ""
	I0719 15:49:38.274691   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.274699   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:38.274704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:38.274747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:38.311326   58817 cri.go:89] found id: ""
	I0719 15:49:38.311354   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.311365   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:38.311372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:38.311428   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:38.348071   58817 cri.go:89] found id: ""
	I0719 15:49:38.348099   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.348110   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:38.348120   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:38.348134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:38.432986   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:38.433021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.472439   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:38.472486   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:38.526672   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:38.526706   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:38.540777   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:38.540800   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:38.617657   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.118442   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:41.131935   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:41.132016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:41.164303   58817 cri.go:89] found id: ""
	I0719 15:49:41.164330   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.164342   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:41.164348   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:41.164396   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:41.197878   58817 cri.go:89] found id: ""
	I0719 15:49:41.197901   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.197909   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:41.197927   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:41.197979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:41.231682   58817 cri.go:89] found id: ""
	I0719 15:49:41.231712   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.231722   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:41.231730   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:41.231793   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:41.268328   58817 cri.go:89] found id: ""
	I0719 15:49:41.268354   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.268364   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:41.268372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:41.268422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:41.306322   58817 cri.go:89] found id: ""
	I0719 15:49:41.306350   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.306358   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:41.306365   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:41.306416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:41.342332   58817 cri.go:89] found id: ""
	I0719 15:49:41.342361   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.342372   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:41.342379   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:41.342440   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:41.378326   58817 cri.go:89] found id: ""
	I0719 15:49:41.378352   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.378362   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:41.378371   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:41.378422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:41.410776   58817 cri.go:89] found id: ""
	I0719 15:49:41.410804   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.410814   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:41.410824   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:41.410843   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:41.424133   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:41.424157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:41.498684   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.498764   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:41.498784   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:41.583440   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:41.583472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:41.624962   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:41.624998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.177094   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:44.191411   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:44.191466   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:44.226809   58817 cri.go:89] found id: ""
	I0719 15:49:44.226837   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.226847   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:44.226855   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:44.226951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:44.262361   58817 cri.go:89] found id: ""
	I0719 15:49:44.262391   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.262402   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:44.262408   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:44.262452   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:44.295729   58817 cri.go:89] found id: ""
	I0719 15:49:44.295758   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.295768   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:44.295775   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:44.295836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:44.330968   58817 cri.go:89] found id: ""
	I0719 15:49:44.330996   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.331005   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:44.331012   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:44.331068   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:44.367914   58817 cri.go:89] found id: ""
	I0719 15:49:44.367937   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.367945   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:44.367951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:44.368005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:44.401127   58817 cri.go:89] found id: ""
	I0719 15:49:44.401151   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.401159   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:44.401164   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:44.401207   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:44.435696   58817 cri.go:89] found id: ""
	I0719 15:49:44.435724   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.435734   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:44.435741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:44.435803   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:44.481553   58817 cri.go:89] found id: ""
	I0719 15:49:44.481582   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.481592   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:44.481603   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:44.481618   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:44.573147   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:44.573181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:44.618556   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:44.618580   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.673328   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:44.673364   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:44.687806   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:44.687835   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:44.763624   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.264039   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:47.277902   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:47.277984   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:47.318672   58817 cri.go:89] found id: ""
	I0719 15:49:47.318702   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.318713   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:47.318720   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:47.318780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:47.360410   58817 cri.go:89] found id: ""
	I0719 15:49:47.360434   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.360444   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:47.360451   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:47.360507   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:47.397890   58817 cri.go:89] found id: ""
	I0719 15:49:47.397918   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.397925   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:47.397931   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:47.397981   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:47.438930   58817 cri.go:89] found id: ""
	I0719 15:49:47.438960   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.438971   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:47.438981   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:47.439040   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:47.479242   58817 cri.go:89] found id: ""
	I0719 15:49:47.479267   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.479277   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:47.479285   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:47.479341   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:47.518583   58817 cri.go:89] found id: ""
	I0719 15:49:47.518610   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.518620   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:47.518628   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:47.518686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:47.553714   58817 cri.go:89] found id: ""
	I0719 15:49:47.553736   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.553744   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:47.553750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:47.553798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:47.591856   58817 cri.go:89] found id: ""
	I0719 15:49:47.591879   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.591886   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:47.591893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:47.591904   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:47.644911   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:47.644951   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:47.659718   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:47.659742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:47.735693   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.735713   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:47.735727   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:47.816090   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:47.816121   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:50.358703   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:50.373832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:50.373908   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:50.408598   58817 cri.go:89] found id: ""
	I0719 15:49:50.408640   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.408649   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:50.408655   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:50.408701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:50.446067   58817 cri.go:89] found id: ""
	I0719 15:49:50.446096   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.446104   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:50.446110   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:50.446152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:50.480886   58817 cri.go:89] found id: ""
	I0719 15:49:50.480918   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.480927   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:50.480933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:50.480997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:50.514680   58817 cri.go:89] found id: ""
	I0719 15:49:50.514707   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.514717   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:50.514724   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:50.514779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:50.550829   58817 cri.go:89] found id: ""
	I0719 15:49:50.550854   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.550861   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:50.550866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:50.550910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:50.585407   58817 cri.go:89] found id: ""
	I0719 15:49:50.585434   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.585444   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:50.585452   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:50.585511   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:50.623083   58817 cri.go:89] found id: ""
	I0719 15:49:50.623110   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.623121   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:50.623129   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:50.623181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:50.667231   58817 cri.go:89] found id: ""
	I0719 15:49:50.667258   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.667266   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:50.667274   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:50.667290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:50.718998   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:50.719032   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:50.733560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:50.733595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:50.800276   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:50.800298   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:50.800310   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:50.881314   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:50.881354   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.427179   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:53.444191   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:53.444250   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:53.481092   58817 cri.go:89] found id: ""
	I0719 15:49:53.481125   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.481135   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:53.481143   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:53.481202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:53.517308   58817 cri.go:89] found id: ""
	I0719 15:49:53.517332   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.517340   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:53.517345   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:53.517390   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:53.552638   58817 cri.go:89] found id: ""
	I0719 15:49:53.552667   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.552677   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:53.552684   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:53.552750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:53.587003   58817 cri.go:89] found id: ""
	I0719 15:49:53.587027   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.587034   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:53.587044   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:53.587093   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:53.620361   58817 cri.go:89] found id: ""
	I0719 15:49:53.620389   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.620399   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:53.620406   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:53.620464   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:53.659231   58817 cri.go:89] found id: ""
	I0719 15:49:53.659255   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.659262   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:53.659267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:53.659323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:53.695312   58817 cri.go:89] found id: ""
	I0719 15:49:53.695345   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.695355   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:53.695362   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:53.695430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:53.735670   58817 cri.go:89] found id: ""
	I0719 15:49:53.735698   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.735708   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:53.735718   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:53.735733   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:53.750912   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:53.750940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:53.818038   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:53.818064   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:53.818077   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:53.902200   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:53.902259   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.945805   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:53.945847   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.498178   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:56.511454   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:56.511541   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:56.548043   58817 cri.go:89] found id: ""
	I0719 15:49:56.548070   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.548081   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:56.548089   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:56.548149   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:56.583597   58817 cri.go:89] found id: ""
	I0719 15:49:56.583620   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.583632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:56.583651   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:56.583710   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:56.622673   58817 cri.go:89] found id: ""
	I0719 15:49:56.622704   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.622714   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:56.622722   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:56.622785   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:56.659663   58817 cri.go:89] found id: ""
	I0719 15:49:56.659691   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.659702   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:56.659711   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:56.659764   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:56.694072   58817 cri.go:89] found id: ""
	I0719 15:49:56.694097   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.694105   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:56.694111   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:56.694158   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:56.730104   58817 cri.go:89] found id: ""
	I0719 15:49:56.730131   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.730139   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:56.730144   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:56.730202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:56.762952   58817 cri.go:89] found id: ""
	I0719 15:49:56.762977   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.762988   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:56.762995   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:56.763059   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:56.800091   58817 cri.go:89] found id: ""
	I0719 15:49:56.800114   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.800122   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:56.800130   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:56.800141   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:56.843328   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:56.843363   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.894700   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:56.894734   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:56.908975   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:56.908999   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:56.980062   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:56.980087   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:56.980099   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:59.557467   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:59.571083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:59.571151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:59.606593   58817 cri.go:89] found id: ""
	I0719 15:49:59.606669   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.606680   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:59.606688   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:59.606743   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:59.643086   58817 cri.go:89] found id: ""
	I0719 15:49:59.643115   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.643126   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:59.643134   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:59.643188   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:59.678976   58817 cri.go:89] found id: ""
	I0719 15:49:59.678995   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.679002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:59.679008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:59.679060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:59.713450   58817 cri.go:89] found id: ""
	I0719 15:49:59.713483   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.713490   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:59.713495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:59.713540   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:59.749902   58817 cri.go:89] found id: ""
	I0719 15:49:59.749924   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.749932   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:59.749938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:59.749985   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:59.793298   58817 cri.go:89] found id: ""
	I0719 15:49:59.793327   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.793335   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:59.793341   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:59.793399   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:59.835014   58817 cri.go:89] found id: ""
	I0719 15:49:59.835040   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.835047   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:59.835053   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:59.835101   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:59.874798   58817 cri.go:89] found id: ""
	I0719 15:49:59.874824   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.874831   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:59.874840   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:59.874851   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:59.948173   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:59.948195   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:59.948210   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:00.026793   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:00.026828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:00.066659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:00.066687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:00.119005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:00.119036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:02.634375   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:02.648845   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:02.648918   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:02.683204   58817 cri.go:89] found id: ""
	I0719 15:50:02.683231   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.683240   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:02.683246   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:02.683308   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:02.718869   58817 cri.go:89] found id: ""
	I0719 15:50:02.718901   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.718914   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:02.718921   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:02.718979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:02.758847   58817 cri.go:89] found id: ""
	I0719 15:50:02.758874   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.758885   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:02.758892   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:02.758951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:02.800199   58817 cri.go:89] found id: ""
	I0719 15:50:02.800230   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.800238   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:02.800243   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:02.800289   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:02.840302   58817 cri.go:89] found id: ""
	I0719 15:50:02.840334   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.840345   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:02.840353   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:02.840415   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:02.874769   58817 cri.go:89] found id: ""
	I0719 15:50:02.874794   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.874801   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:02.874818   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:02.874885   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:02.914492   58817 cri.go:89] found id: ""
	I0719 15:50:02.914522   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.914532   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:02.914540   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:02.914601   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:02.951548   58817 cri.go:89] found id: ""
	I0719 15:50:02.951577   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.951588   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:02.951599   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:02.951613   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:03.003081   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:03.003118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:03.017738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:03.017767   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:03.090925   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:03.090947   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:03.090958   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:03.169066   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:03.169101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:05.712269   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:05.724799   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:05.724872   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:05.759074   58817 cri.go:89] found id: ""
	I0719 15:50:05.759101   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.759108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:05.759113   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:05.759169   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:05.798316   58817 cri.go:89] found id: ""
	I0719 15:50:05.798413   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.798432   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:05.798442   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:05.798504   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:05.834861   58817 cri.go:89] found id: ""
	I0719 15:50:05.834890   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.834898   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:05.834903   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:05.834962   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:05.868547   58817 cri.go:89] found id: ""
	I0719 15:50:05.868574   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.868582   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:05.868588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:05.868691   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:05.903684   58817 cri.go:89] found id: ""
	I0719 15:50:05.903718   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.903730   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:05.903738   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:05.903798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:05.938521   58817 cri.go:89] found id: ""
	I0719 15:50:05.938552   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.938567   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:05.938576   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:05.938628   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:05.973683   58817 cri.go:89] found id: ""
	I0719 15:50:05.973710   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.973717   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:05.973723   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:05.973825   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:06.010528   58817 cri.go:89] found id: ""
	I0719 15:50:06.010559   58817 logs.go:276] 0 containers: []
	W0719 15:50:06.010569   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:06.010580   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:06.010593   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:06.053090   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:06.053145   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:06.106906   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:06.106939   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:06.121914   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:06.121944   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:06.197465   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:06.197492   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:06.197507   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:08.782285   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:08.795115   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:08.795180   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:08.834264   58817 cri.go:89] found id: ""
	I0719 15:50:08.834295   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.834306   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:08.834314   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:08.834371   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:08.873227   58817 cri.go:89] found id: ""
	I0719 15:50:08.873258   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.873268   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:08.873276   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:08.873330   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:08.907901   58817 cri.go:89] found id: ""
	I0719 15:50:08.907929   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.907940   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:08.907948   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:08.908011   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:08.941350   58817 cri.go:89] found id: ""
	I0719 15:50:08.941381   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.941391   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:08.941400   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:08.941453   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:08.978469   58817 cri.go:89] found id: ""
	I0719 15:50:08.978495   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.978502   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:08.978508   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:08.978563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:09.017469   58817 cri.go:89] found id: ""
	I0719 15:50:09.017492   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.017501   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:09.017509   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:09.017563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:09.056675   58817 cri.go:89] found id: ""
	I0719 15:50:09.056703   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.056711   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:09.056718   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:09.056769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:09.096655   58817 cri.go:89] found id: ""
	I0719 15:50:09.096680   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.096688   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:09.096696   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:09.096710   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:09.135765   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:09.135791   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:09.189008   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:09.189044   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:09.203988   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:09.204014   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:09.278418   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:09.278440   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:09.278453   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:11.857017   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:11.870592   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:11.870650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:11.907057   58817 cri.go:89] found id: ""
	I0719 15:50:11.907088   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.907097   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:11.907103   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:11.907152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:11.944438   58817 cri.go:89] found id: ""
	I0719 15:50:11.944466   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.944476   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:11.944484   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:11.944547   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:11.986506   58817 cri.go:89] found id: ""
	I0719 15:50:11.986534   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.986545   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:11.986553   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:11.986610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:12.026171   58817 cri.go:89] found id: ""
	I0719 15:50:12.026221   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.026250   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:12.026260   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:12.026329   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:12.060990   58817 cri.go:89] found id: ""
	I0719 15:50:12.061018   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.061028   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:12.061036   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:12.061097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:12.098545   58817 cri.go:89] found id: ""
	I0719 15:50:12.098573   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.098584   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:12.098591   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:12.098650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:12.134949   58817 cri.go:89] found id: ""
	I0719 15:50:12.134978   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.134989   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:12.134996   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:12.135061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:12.171142   58817 cri.go:89] found id: ""
	I0719 15:50:12.171165   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.171173   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:12.171181   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:12.171193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:12.211496   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:12.211536   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:12.266024   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:12.266060   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:12.280951   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:12.280985   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:12.352245   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:12.352269   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:12.352280   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:14.929733   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:14.943732   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:14.943815   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:14.980506   58817 cri.go:89] found id: ""
	I0719 15:50:14.980529   58817 logs.go:276] 0 containers: []
	W0719 15:50:14.980539   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:14.980545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:14.980590   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:15.015825   58817 cri.go:89] found id: ""
	I0719 15:50:15.015853   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.015863   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:15.015870   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:15.015937   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:15.054862   58817 cri.go:89] found id: ""
	I0719 15:50:15.054894   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.054905   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:15.054913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:15.054973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:15.092542   58817 cri.go:89] found id: ""
	I0719 15:50:15.092573   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.092590   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:15.092598   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:15.092663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:15.127815   58817 cri.go:89] found id: ""
	I0719 15:50:15.127843   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.127853   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:15.127865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:15.127931   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:15.166423   58817 cri.go:89] found id: ""
	I0719 15:50:15.166446   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.166453   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:15.166459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:15.166517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.199240   58817 cri.go:89] found id: ""
	I0719 15:50:15.199268   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.199277   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:15.199283   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:15.199336   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:15.231927   58817 cri.go:89] found id: ""
	I0719 15:50:15.231957   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.231966   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:15.231978   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:15.231994   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:15.284551   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:15.284586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:15.299152   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:15.299181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:15.374085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:15.374107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:15.374123   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:15.458103   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:15.458144   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:18.003862   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:18.019166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:18.019215   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:18.053430   58817 cri.go:89] found id: ""
	I0719 15:50:18.053470   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.053482   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:18.053492   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:18.053565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:18.091897   58817 cri.go:89] found id: ""
	I0719 15:50:18.091922   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.091931   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:18.091936   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:18.091997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:18.127239   58817 cri.go:89] found id: ""
	I0719 15:50:18.127266   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.127277   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:18.127287   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:18.127346   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:18.163927   58817 cri.go:89] found id: ""
	I0719 15:50:18.163953   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.163965   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:18.163973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:18.164032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:18.199985   58817 cri.go:89] found id: ""
	I0719 15:50:18.200015   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.200027   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:18.200034   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:18.200096   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:18.234576   58817 cri.go:89] found id: ""
	I0719 15:50:18.234603   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.234614   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:18.234625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:18.234686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:18.270493   58817 cri.go:89] found id: ""
	I0719 15:50:18.270516   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.270526   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:18.270532   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:18.270588   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:18.306779   58817 cri.go:89] found id: ""
	I0719 15:50:18.306813   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.306821   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:18.306832   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:18.306850   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:18.375782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:18.375814   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:18.390595   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:18.390630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:18.459204   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:18.459227   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:18.459243   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:18.540667   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:18.540724   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.084736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:21.099416   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:21.099495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:21.133193   58817 cri.go:89] found id: ""
	I0719 15:50:21.133216   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.133224   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:21.133231   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:21.133309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:21.174649   58817 cri.go:89] found id: ""
	I0719 15:50:21.174679   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.174689   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:21.174697   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:21.174757   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:21.208279   58817 cri.go:89] found id: ""
	I0719 15:50:21.208309   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.208319   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:21.208325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:21.208386   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:21.242199   58817 cri.go:89] found id: ""
	I0719 15:50:21.242222   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.242229   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:21.242247   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:21.242301   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:21.278018   58817 cri.go:89] found id: ""
	I0719 15:50:21.278050   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.278059   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:21.278069   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:21.278125   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:21.314397   58817 cri.go:89] found id: ""
	I0719 15:50:21.314419   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.314427   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:21.314435   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:21.314490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:21.349041   58817 cri.go:89] found id: ""
	I0719 15:50:21.349067   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.349075   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:21.349080   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:21.349129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:21.387325   58817 cri.go:89] found id: ""
	I0719 15:50:21.387353   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.387361   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:21.387369   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:21.387384   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:21.401150   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:21.401177   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:21.465784   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:21.465810   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:21.465821   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:21.545965   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:21.545998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.584054   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:21.584081   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.139199   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:24.152485   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:24.152552   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:24.186387   58817 cri.go:89] found id: ""
	I0719 15:50:24.186417   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.186427   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:24.186435   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:24.186494   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:24.226061   58817 cri.go:89] found id: ""
	I0719 15:50:24.226093   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.226103   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:24.226111   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:24.226168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:24.265542   58817 cri.go:89] found id: ""
	I0719 15:50:24.265566   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.265574   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:24.265579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:24.265630   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:24.300277   58817 cri.go:89] found id: ""
	I0719 15:50:24.300308   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.300318   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:24.300325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:24.300378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:24.340163   58817 cri.go:89] found id: ""
	I0719 15:50:24.340192   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.340203   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:24.340211   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:24.340270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:24.375841   58817 cri.go:89] found id: ""
	I0719 15:50:24.375863   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.375873   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:24.375881   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:24.375941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:24.413528   58817 cri.go:89] found id: ""
	I0719 15:50:24.413558   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.413569   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:24.413577   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:24.413641   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:24.451101   58817 cri.go:89] found id: ""
	I0719 15:50:24.451129   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.451139   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:24.451148   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:24.451163   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:24.491150   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:24.491178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.544403   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:24.544436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:24.560376   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:24.560407   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:24.633061   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:24.633081   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:24.633097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.214261   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:27.227642   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:27.227724   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:27.263805   58817 cri.go:89] found id: ""
	I0719 15:50:27.263838   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.263851   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:27.263859   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:27.263941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:27.299817   58817 cri.go:89] found id: ""
	I0719 15:50:27.299860   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.299872   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:27.299879   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:27.299947   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:27.339924   58817 cri.go:89] found id: ""
	I0719 15:50:27.339953   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.339963   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:27.339971   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:27.340036   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:27.375850   58817 cri.go:89] found id: ""
	I0719 15:50:27.375877   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.375885   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:27.375891   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:27.375940   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:27.410395   58817 cri.go:89] found id: ""
	I0719 15:50:27.410420   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.410429   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:27.410437   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:27.410498   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:27.444124   58817 cri.go:89] found id: ""
	I0719 15:50:27.444154   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.444162   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:27.444167   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:27.444230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:27.478162   58817 cri.go:89] found id: ""
	I0719 15:50:27.478191   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.478202   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:27.478210   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:27.478285   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:27.514901   58817 cri.go:89] found id: ""
	I0719 15:50:27.514939   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.514949   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:27.514959   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:27.514973   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.591783   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:27.591815   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:27.629389   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:27.629431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:27.684318   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:27.684351   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:27.698415   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:27.698441   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:27.770032   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.270332   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:30.284645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:30.284716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:30.324096   58817 cri.go:89] found id: ""
	I0719 15:50:30.324120   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.324128   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:30.324133   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:30.324181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:30.362682   58817 cri.go:89] found id: ""
	I0719 15:50:30.362749   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.362769   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:30.362777   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:30.362848   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:30.400797   58817 cri.go:89] found id: ""
	I0719 15:50:30.400829   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.400840   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:30.400847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:30.400910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:30.438441   58817 cri.go:89] found id: ""
	I0719 15:50:30.438471   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.438482   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:30.438490   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:30.438556   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:30.481525   58817 cri.go:89] found id: ""
	I0719 15:50:30.481555   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.481567   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:30.481581   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:30.481643   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:30.527384   58817 cri.go:89] found id: ""
	I0719 15:50:30.527416   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.527426   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:30.527434   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:30.527495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:30.591502   58817 cri.go:89] found id: ""
	I0719 15:50:30.591530   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.591540   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:30.591548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:30.591603   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:30.627271   58817 cri.go:89] found id: ""
	I0719 15:50:30.627298   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.627306   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:30.627315   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:30.627326   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:30.680411   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:30.680463   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:30.694309   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:30.694344   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:30.771740   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.771776   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:30.771794   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:30.857591   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:30.857625   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:33.407376   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:33.421602   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:33.421680   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:33.458608   58817 cri.go:89] found id: ""
	I0719 15:50:33.458640   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.458650   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:33.458658   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:33.458720   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:33.494250   58817 cri.go:89] found id: ""
	I0719 15:50:33.494279   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.494290   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:33.494298   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:33.494363   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:33.534768   58817 cri.go:89] found id: ""
	I0719 15:50:33.534793   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.534804   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:33.534811   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:33.534876   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:33.569912   58817 cri.go:89] found id: ""
	I0719 15:50:33.569942   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.569950   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:33.569955   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:33.570010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:33.605462   58817 cri.go:89] found id: ""
	I0719 15:50:33.605486   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.605496   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:33.605503   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:33.605569   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:33.649091   58817 cri.go:89] found id: ""
	I0719 15:50:33.649121   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.649129   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:33.649134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:33.649184   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:33.682056   58817 cri.go:89] found id: ""
	I0719 15:50:33.682084   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.682092   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:33.682097   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:33.682145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:33.717454   58817 cri.go:89] found id: ""
	I0719 15:50:33.717483   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.717492   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:33.717501   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:33.717513   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:33.770793   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:33.770828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:33.784549   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:33.784583   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:33.860831   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:33.860851   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:33.860862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:33.936003   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:33.936037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.476206   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:36.489032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:36.489090   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:36.525070   58817 cri.go:89] found id: ""
	I0719 15:50:36.525098   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.525108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:36.525116   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:36.525171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:36.560278   58817 cri.go:89] found id: ""
	I0719 15:50:36.560301   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.560309   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:36.560315   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:36.560367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:36.595594   58817 cri.go:89] found id: ""
	I0719 15:50:36.595620   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.595630   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:36.595637   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:36.595696   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:36.631403   58817 cri.go:89] found id: ""
	I0719 15:50:36.631434   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.631442   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:36.631447   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:36.631502   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:36.671387   58817 cri.go:89] found id: ""
	I0719 15:50:36.671413   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.671424   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:36.671431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:36.671492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:36.705473   58817 cri.go:89] found id: ""
	I0719 15:50:36.705500   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.705507   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:36.705514   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:36.705559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:36.741077   58817 cri.go:89] found id: ""
	I0719 15:50:36.741110   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.741126   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:36.741133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:36.741195   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:36.781987   58817 cri.go:89] found id: ""
	I0719 15:50:36.782016   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.782025   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:36.782036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:36.782051   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:36.795107   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:36.795138   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:36.869034   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:36.869056   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:36.869070   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:36.946172   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:36.946207   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.983497   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:36.983535   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.537658   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:39.551682   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:39.551756   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:39.588176   58817 cri.go:89] found id: ""
	I0719 15:50:39.588199   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.588206   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:39.588212   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:39.588255   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:39.623202   58817 cri.go:89] found id: ""
	I0719 15:50:39.623235   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.623245   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:39.623265   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:39.623317   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:39.658601   58817 cri.go:89] found id: ""
	I0719 15:50:39.658634   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.658646   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:39.658653   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:39.658712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:39.694820   58817 cri.go:89] found id: ""
	I0719 15:50:39.694842   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.694852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:39.694859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:39.694922   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:39.734296   58817 cri.go:89] found id: ""
	I0719 15:50:39.734325   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.734333   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:39.734339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:39.734393   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:39.773416   58817 cri.go:89] found id: ""
	I0719 15:50:39.773506   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.773527   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:39.773538   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:39.773614   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:39.812265   58817 cri.go:89] found id: ""
	I0719 15:50:39.812293   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.812303   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:39.812311   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:39.812366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:39.849148   58817 cri.go:89] found id: ""
	I0719 15:50:39.849177   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.849188   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:39.849199   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:39.849213   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.900254   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:39.900285   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:39.913997   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:39.914025   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:39.986937   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:39.986963   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:39.986982   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:40.071967   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:40.072009   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:42.612170   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:42.625741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:42.625824   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:42.662199   58817 cri.go:89] found id: ""
	I0719 15:50:42.662230   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.662253   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:42.662261   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:42.662314   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:42.702346   58817 cri.go:89] found id: ""
	I0719 15:50:42.702374   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.702387   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:42.702394   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:42.702454   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:42.743446   58817 cri.go:89] found id: ""
	I0719 15:50:42.743475   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.743488   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:42.743495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:42.743555   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:42.783820   58817 cri.go:89] found id: ""
	I0719 15:50:42.783844   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.783852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:42.783858   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:42.783917   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:42.821375   58817 cri.go:89] found id: ""
	I0719 15:50:42.821403   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.821414   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:42.821421   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:42.821484   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:42.856010   58817 cri.go:89] found id: ""
	I0719 15:50:42.856037   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.856045   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:42.856051   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:42.856097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:42.895867   58817 cri.go:89] found id: ""
	I0719 15:50:42.895894   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.895902   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:42.895908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:42.895955   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:42.933077   58817 cri.go:89] found id: ""
	I0719 15:50:42.933106   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.933114   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:42.933123   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:42.933135   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:42.984103   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:42.984142   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:42.998043   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:42.998075   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:43.069188   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:43.069210   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:43.069222   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:43.148933   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:43.148991   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:45.687007   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:45.701019   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:45.701099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:45.737934   58817 cri.go:89] found id: ""
	I0719 15:50:45.737960   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.737970   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:45.737978   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:45.738037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:45.774401   58817 cri.go:89] found id: ""
	I0719 15:50:45.774428   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.774438   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:45.774447   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:45.774503   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:45.814507   58817 cri.go:89] found id: ""
	I0719 15:50:45.814533   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.814544   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:45.814551   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:45.814610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:45.855827   58817 cri.go:89] found id: ""
	I0719 15:50:45.855852   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.855870   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:45.855877   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:45.855928   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:45.898168   58817 cri.go:89] found id: ""
	I0719 15:50:45.898196   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.898204   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:45.898209   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:45.898281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:45.933402   58817 cri.go:89] found id: ""
	I0719 15:50:45.933433   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.933449   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:45.933468   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:45.933525   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:45.971415   58817 cri.go:89] found id: ""
	I0719 15:50:45.971443   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.971451   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:45.971457   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:45.971508   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:46.006700   58817 cri.go:89] found id: ""
	I0719 15:50:46.006729   58817 logs.go:276] 0 containers: []
	W0719 15:50:46.006739   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:46.006750   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:46.006764   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:46.083885   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:46.083925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:46.122277   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:46.122308   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:46.172907   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:46.172940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:46.186365   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:46.186392   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:46.263803   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
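The cycle above repeats for the rest of this log: minikube probes for each control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) with "crictl ps -a --quiet --name=<component>", finds none, gathers CRI-O, kubelet, dmesg and "kubectl describe nodes" diagnostics, and retries a few seconds later. Below is a minimal Go sketch of that kind of poll loop; it is not minikube's implementation, it runs crictl locally instead of over SSH, and the two-minute timeout and three-second interval are assumptions read off the timestamps.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs mirrors the "listing CRI containers" lines above: it asks
// crictl for all containers (running or not) whose name matches component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption, not taken from the log
	for time.Now().Before(deadline) {
		ids, err := containerIDs("kube-apiserver")
		if err == nil && len(ids) > 0 {
			fmt.Println("kube-apiserver container found:", ids)
			return
		}
		// Nothing found yet; the real run also gathers CRI-O, kubelet and
		// dmesg logs at this point before retrying.
		time.Sleep(3 * time.Second) // roughly the cadence visible in the timestamps above
	}
	fmt.Println("timed out waiting for a kube-apiserver container")
}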
	I0719 15:50:48.764336   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:48.778927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:48.779002   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:48.816538   58817 cri.go:89] found id: ""
	I0719 15:50:48.816566   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.816576   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:48.816589   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:48.816657   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:48.852881   58817 cri.go:89] found id: ""
	I0719 15:50:48.852904   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.852912   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:48.852925   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:48.852987   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:48.886156   58817 cri.go:89] found id: ""
	I0719 15:50:48.886187   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.886196   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:48.886202   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:48.886271   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:48.922221   58817 cri.go:89] found id: ""
	I0719 15:50:48.922270   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.922281   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:48.922289   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:48.922350   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:48.957707   58817 cri.go:89] found id: ""
	I0719 15:50:48.957735   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.957743   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:48.957750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:48.957797   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:48.994635   58817 cri.go:89] found id: ""
	I0719 15:50:48.994667   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.994679   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:48.994687   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:48.994747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:49.028849   58817 cri.go:89] found id: ""
	I0719 15:50:49.028873   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.028881   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:49.028886   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:49.028933   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:49.063835   58817 cri.go:89] found id: ""
	I0719 15:50:49.063865   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.063875   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:49.063885   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:49.063900   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:49.144709   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:49.144751   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:49.184783   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:49.184819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:49.237005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:49.237037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:49.250568   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:49.250595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:49.319473   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:51.820132   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:51.833230   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:51.833298   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:51.870393   58817 cri.go:89] found id: ""
	I0719 15:50:51.870424   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.870435   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:51.870442   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:51.870496   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:51.906094   58817 cri.go:89] found id: ""
	I0719 15:50:51.906119   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.906132   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:51.906139   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:51.906192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:51.941212   58817 cri.go:89] found id: ""
	I0719 15:50:51.941236   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.941244   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:51.941257   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:51.941300   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:51.973902   58817 cri.go:89] found id: ""
	I0719 15:50:51.973925   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.973933   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:51.973938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:51.973983   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:52.010449   58817 cri.go:89] found id: ""
	I0719 15:50:52.010476   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.010486   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:52.010493   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:52.010551   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:52.047317   58817 cri.go:89] found id: ""
	I0719 15:50:52.047343   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.047353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:52.047360   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:52.047405   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:52.081828   58817 cri.go:89] found id: ""
	I0719 15:50:52.081859   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.081868   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:52.081875   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:52.081946   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:52.119128   58817 cri.go:89] found id: ""
	I0719 15:50:52.119156   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.119164   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:52.119172   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:52.119185   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:52.132928   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:52.132955   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:52.203075   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
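Every cycle ends with the same failure: "kubectl describe nodes" against /var/lib/minikube/kubeconfig is refused on localhost:8443 because nothing is listening on the apiserver port yet. That can be confirmed independently of kubectl with a plain TCP dial; the snippet below is a standalone sketch, with the host and port taken from the error text above and an arbitrarily chosen two-second timeout.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// localhost:8443 is the apiserver address from the "connection refused" errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err) // expected while the control plane is down
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}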
	I0719 15:50:52.203099   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:52.203114   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:52.278743   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:52.278781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:52.325456   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:52.325492   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:54.879243   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:54.894078   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:54.894147   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:54.931463   58817 cri.go:89] found id: ""
	I0719 15:50:54.931496   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.931507   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:54.931514   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:54.931585   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:54.968803   58817 cri.go:89] found id: ""
	I0719 15:50:54.968831   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.968840   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:54.968847   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:54.968911   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:55.005621   58817 cri.go:89] found id: ""
	I0719 15:50:55.005646   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.005657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:55.005664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:55.005733   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:55.040271   58817 cri.go:89] found id: ""
	I0719 15:50:55.040292   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.040299   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:55.040305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:55.040349   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:55.072693   58817 cri.go:89] found id: ""
	I0719 15:50:55.072714   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.072722   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:55.072728   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:55.072779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:55.111346   58817 cri.go:89] found id: ""
	I0719 15:50:55.111373   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.111381   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:55.111386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:55.111430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:55.149358   58817 cri.go:89] found id: ""
	I0719 15:50:55.149385   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.149395   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:55.149402   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:55.149459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:55.183807   58817 cri.go:89] found id: ""
	I0719 15:50:55.183834   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.183845   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:55.183856   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:55.183870   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:55.234128   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:55.234157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:55.247947   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:55.247971   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:55.317405   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:55.317425   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:55.317436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:55.398613   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:55.398649   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:57.945601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:57.960139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:57.960193   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:58.000436   58817 cri.go:89] found id: ""
	I0719 15:50:58.000462   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.000469   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:58.000476   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:58.000522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:58.041437   58817 cri.go:89] found id: ""
	I0719 15:50:58.041463   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.041472   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:58.041477   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:58.041539   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:58.077280   58817 cri.go:89] found id: ""
	I0719 15:50:58.077303   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.077311   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:58.077317   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:58.077373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:58.111992   58817 cri.go:89] found id: ""
	I0719 15:50:58.112019   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.112026   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:58.112032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:58.112107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:58.146582   58817 cri.go:89] found id: ""
	I0719 15:50:58.146610   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.146620   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:58.146625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:58.146669   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:58.182159   58817 cri.go:89] found id: ""
	I0719 15:50:58.182187   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.182196   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:58.182204   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:58.182279   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:58.215804   58817 cri.go:89] found id: ""
	I0719 15:50:58.215834   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.215844   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:58.215852   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:58.215913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:58.249366   58817 cri.go:89] found id: ""
	I0719 15:50:58.249392   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.249402   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:58.249413   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:58.249430   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:58.324510   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:58.324536   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:58.324550   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:58.406320   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:58.406353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:58.449820   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:58.449854   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:58.502245   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:58.502281   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:01.018374   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:01.032683   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:01.032753   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:01.071867   58817 cri.go:89] found id: ""
	I0719 15:51:01.071898   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.071910   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:01.071917   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:01.071982   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:01.108227   58817 cri.go:89] found id: ""
	I0719 15:51:01.108251   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.108259   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:01.108264   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:01.108309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:01.143029   58817 cri.go:89] found id: ""
	I0719 15:51:01.143064   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.143076   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:01.143083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:01.143154   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:01.178871   58817 cri.go:89] found id: ""
	I0719 15:51:01.178901   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.178911   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:01.178919   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:01.178974   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:01.216476   58817 cri.go:89] found id: ""
	I0719 15:51:01.216507   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.216518   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:01.216526   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:01.216584   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:01.254534   58817 cri.go:89] found id: ""
	I0719 15:51:01.254557   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.254565   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:01.254572   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:01.254617   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:01.293156   58817 cri.go:89] found id: ""
	I0719 15:51:01.293187   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.293198   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:01.293212   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:01.293278   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:01.328509   58817 cri.go:89] found id: ""
	I0719 15:51:01.328538   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.328549   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:01.328560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:01.328574   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:01.399659   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:01.399678   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:01.399693   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:01.476954   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:01.476993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:01.519513   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:01.519539   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:01.571976   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:01.572015   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.088726   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:04.102579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:04.102642   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:04.141850   58817 cri.go:89] found id: ""
	I0719 15:51:04.141888   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.141899   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:04.141907   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:04.141988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:04.177821   58817 cri.go:89] found id: ""
	I0719 15:51:04.177846   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.177854   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:04.177859   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:04.177914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:04.212905   58817 cri.go:89] found id: ""
	I0719 15:51:04.212935   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.212945   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:04.212951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:04.213012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:04.249724   58817 cri.go:89] found id: ""
	I0719 15:51:04.249762   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.249773   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:04.249781   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:04.249843   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:04.285373   58817 cri.go:89] found id: ""
	I0719 15:51:04.285407   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.285418   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:04.285430   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:04.285490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:04.348842   58817 cri.go:89] found id: ""
	I0719 15:51:04.348878   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.348888   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:04.348895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:04.348963   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:04.384420   58817 cri.go:89] found id: ""
	I0719 15:51:04.384448   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.384459   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:04.384466   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:04.384533   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:04.420716   58817 cri.go:89] found id: ""
	I0719 15:51:04.420746   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.420754   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:04.420763   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:04.420775   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:04.472986   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:04.473027   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.488911   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:04.488938   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:04.563103   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:04.563125   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:04.563139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:04.640110   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:04.640151   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.183190   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:07.196605   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:07.196667   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:07.234974   58817 cri.go:89] found id: ""
	I0719 15:51:07.235002   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.235010   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:07.235016   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:07.235066   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:07.269045   58817 cri.go:89] found id: ""
	I0719 15:51:07.269078   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.269089   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:07.269096   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:07.269156   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:07.308866   58817 cri.go:89] found id: ""
	I0719 15:51:07.308897   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.308907   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:07.308914   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:07.308973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:07.344406   58817 cri.go:89] found id: ""
	I0719 15:51:07.344440   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.344451   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:07.344459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:07.344517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:07.379914   58817 cri.go:89] found id: ""
	I0719 15:51:07.379948   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.379956   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:07.379962   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:07.380010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:07.420884   58817 cri.go:89] found id: ""
	I0719 15:51:07.420923   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.420934   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:07.420942   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:07.421012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:07.455012   58817 cri.go:89] found id: ""
	I0719 15:51:07.455041   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.455071   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:07.455082   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:07.455151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:07.492321   58817 cri.go:89] found id: ""
	I0719 15:51:07.492346   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.492354   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:07.492362   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:07.492374   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:07.506377   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:07.506408   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:07.578895   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:07.578928   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:07.578943   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:07.662333   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:07.662373   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.701823   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:07.701856   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.256610   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:10.270156   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:10.270225   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:10.311318   58817 cri.go:89] found id: ""
	I0719 15:51:10.311347   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.311357   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:10.311365   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:10.311422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:10.347145   58817 cri.go:89] found id: ""
	I0719 15:51:10.347174   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.347183   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:10.347189   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:10.347243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:10.381626   58817 cri.go:89] found id: ""
	I0719 15:51:10.381659   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.381672   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:10.381680   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:10.381750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:10.417077   58817 cri.go:89] found id: ""
	I0719 15:51:10.417103   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.417111   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:10.417117   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:10.417174   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:10.454094   58817 cri.go:89] found id: ""
	I0719 15:51:10.454123   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.454131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:10.454137   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:10.454185   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:10.489713   58817 cri.go:89] found id: ""
	I0719 15:51:10.489739   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.489747   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:10.489753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:10.489799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:10.524700   58817 cri.go:89] found id: ""
	I0719 15:51:10.524737   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.524745   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:10.524753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:10.524810   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:10.564249   58817 cri.go:89] found id: ""
	I0719 15:51:10.564277   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.564285   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:10.564293   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:10.564309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.618563   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:10.618599   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:10.633032   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:10.633058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:10.706504   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:10.706530   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:10.706546   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:10.800542   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:10.800581   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.357761   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:13.371415   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:13.371492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:13.406666   58817 cri.go:89] found id: ""
	I0719 15:51:13.406695   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.406705   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:13.406713   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:13.406773   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:13.448125   58817 cri.go:89] found id: ""
	I0719 15:51:13.448153   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.448164   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:13.448171   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:13.448233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:13.483281   58817 cri.go:89] found id: ""
	I0719 15:51:13.483306   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.483315   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:13.483323   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:13.483384   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:13.522499   58817 cri.go:89] found id: ""
	I0719 15:51:13.522527   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.522538   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:13.522545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:13.522605   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:13.560011   58817 cri.go:89] found id: ""
	I0719 15:51:13.560038   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.560049   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:13.560056   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:13.560115   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:13.596777   58817 cri.go:89] found id: ""
	I0719 15:51:13.596812   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.596824   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:13.596832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:13.596883   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:13.633765   58817 cri.go:89] found id: ""
	I0719 15:51:13.633790   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.633798   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:13.633804   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:13.633857   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:13.670129   58817 cri.go:89] found id: ""
	I0719 15:51:13.670151   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.670160   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:13.670168   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:13.670179   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:13.745337   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:13.745363   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:13.745375   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:13.827800   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:13.827831   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.871659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:13.871695   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:13.925445   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:13.925478   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.439455   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:16.454414   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:16.454485   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:16.494962   58817 cri.go:89] found id: ""
	I0719 15:51:16.494987   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.494997   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:16.495004   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:16.495048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:16.540948   58817 cri.go:89] found id: ""
	I0719 15:51:16.540978   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.540986   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:16.540992   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:16.541052   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:16.588886   58817 cri.go:89] found id: ""
	I0719 15:51:16.588916   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.588926   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:16.588933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:16.588990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:16.649174   58817 cri.go:89] found id: ""
	I0719 15:51:16.649198   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.649207   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:16.649214   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:16.649260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:16.688759   58817 cri.go:89] found id: ""
	I0719 15:51:16.688787   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.688794   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:16.688800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:16.688860   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:16.724730   58817 cri.go:89] found id: ""
	I0719 15:51:16.724759   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.724767   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:16.724773   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:16.724831   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:16.762972   58817 cri.go:89] found id: ""
	I0719 15:51:16.762995   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.763002   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:16.763007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:16.763058   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:16.798054   58817 cri.go:89] found id: ""
	I0719 15:51:16.798080   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.798088   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:16.798096   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:16.798107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:16.887495   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:16.887533   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:16.929384   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:16.929412   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:16.978331   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:16.978362   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.991663   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:16.991687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:17.064706   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
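The diagnostics gathered on every retry are the same four commands run over SSH: journalctl for kubelet and for CRI-O, a filtered dmesg, and a crictl/docker container listing. To rerun them by hand on the node, the sketch below shells out to the commands quoted in the log; executing them locally rather than through minikube's ssh_runner is an assumption made to keep it self-contained.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the "Gathering logs for ..." lines above.
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		fmt.Println("==>", c.name)
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Println("command failed:", err)
		}
		fmt.Print(string(out))
	}
}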
	I0719 15:51:19.565881   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:19.579476   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:19.579536   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:19.614551   58817 cri.go:89] found id: ""
	I0719 15:51:19.614576   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.614586   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:19.614595   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:19.614655   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:19.657984   58817 cri.go:89] found id: ""
	I0719 15:51:19.658012   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.658023   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:19.658030   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:19.658098   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:19.692759   58817 cri.go:89] found id: ""
	I0719 15:51:19.692785   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.692793   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:19.692800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:19.692855   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:19.726119   58817 cri.go:89] found id: ""
	I0719 15:51:19.726148   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.726158   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:19.726174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:19.726230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:19.763348   58817 cri.go:89] found id: ""
	I0719 15:51:19.763372   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.763379   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:19.763385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:19.763439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:19.796880   58817 cri.go:89] found id: ""
	I0719 15:51:19.796909   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.796923   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:19.796929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:19.796977   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:19.831819   58817 cri.go:89] found id: ""
	I0719 15:51:19.831845   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.831853   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:19.831859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:19.831913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:19.866787   58817 cri.go:89] found id: ""
	I0719 15:51:19.866814   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.866825   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:19.866835   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:19.866848   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:19.914087   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:19.914120   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.927236   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:19.927260   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:19.995619   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:19.995643   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:19.995658   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:20.084355   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:20.084385   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:22.623263   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:22.637745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:22.637818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:22.678276   58817 cri.go:89] found id: ""
	I0719 15:51:22.678305   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.678317   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:22.678325   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:22.678378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:22.716710   58817 cri.go:89] found id: ""
	I0719 15:51:22.716736   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.716753   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:22.716761   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:22.716828   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:22.754965   58817 cri.go:89] found id: ""
	I0719 15:51:22.754993   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.755002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:22.755008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:22.755054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:22.788474   58817 cri.go:89] found id: ""
	I0719 15:51:22.788508   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.788519   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:22.788527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:22.788586   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:22.823838   58817 cri.go:89] found id: ""
	I0719 15:51:22.823872   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.823882   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:22.823889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:22.823950   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:22.863086   58817 cri.go:89] found id: ""
	I0719 15:51:22.863127   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.863138   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:22.863146   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:22.863211   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:22.899292   58817 cri.go:89] found id: ""
	I0719 15:51:22.899321   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.899331   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:22.899339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:22.899403   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:22.932292   58817 cri.go:89] found id: ""
	I0719 15:51:22.932318   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.932328   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:22.932338   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:22.932353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:23.003438   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:23.003460   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:23.003477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:23.088349   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:23.088391   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:23.132169   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:23.132194   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:23.184036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:23.184069   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:25.698493   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:25.712199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:25.712267   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:25.750330   58817 cri.go:89] found id: ""
	I0719 15:51:25.750358   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.750368   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:25.750375   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:25.750434   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:25.784747   58817 cri.go:89] found id: ""
	I0719 15:51:25.784777   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.784788   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:25.784794   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:25.784853   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:25.821272   58817 cri.go:89] found id: ""
	I0719 15:51:25.821297   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.821308   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:25.821315   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:25.821370   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:25.858697   58817 cri.go:89] found id: ""
	I0719 15:51:25.858723   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.858732   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:25.858737   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:25.858782   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:25.901706   58817 cri.go:89] found id: ""
	I0719 15:51:25.901738   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.901749   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:25.901757   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:25.901818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:25.943073   58817 cri.go:89] found id: ""
	I0719 15:51:25.943103   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.943115   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:25.943122   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:25.943190   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:25.982707   58817 cri.go:89] found id: ""
	I0719 15:51:25.982731   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.982739   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:25.982745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:25.982791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:26.023419   58817 cri.go:89] found id: ""
	I0719 15:51:26.023442   58817 logs.go:276] 0 containers: []
	W0719 15:51:26.023449   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:26.023456   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:26.023468   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:26.103842   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:26.103875   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:26.143567   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:26.143594   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:26.199821   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:26.199862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:26.214829   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:26.214865   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:26.287368   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:28.788202   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:28.801609   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:28.801676   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:28.834911   58817 cri.go:89] found id: ""
	I0719 15:51:28.834937   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.834947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:28.834955   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:28.835013   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:28.868219   58817 cri.go:89] found id: ""
	I0719 15:51:28.868242   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.868250   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:28.868256   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:28.868315   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:28.904034   58817 cri.go:89] found id: ""
	I0719 15:51:28.904055   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.904063   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:28.904068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:28.904121   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:28.941019   58817 cri.go:89] found id: ""
	I0719 15:51:28.941051   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.941061   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:28.941068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:28.941129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:28.976309   58817 cri.go:89] found id: ""
	I0719 15:51:28.976335   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.976346   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:28.976352   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:28.976410   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:29.011340   58817 cri.go:89] found id: ""
	I0719 15:51:29.011368   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.011378   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:29.011388   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:29.011447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:29.044356   58817 cri.go:89] found id: ""
	I0719 15:51:29.044378   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.044385   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:29.044390   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:29.044438   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:29.080883   58817 cri.go:89] found id: ""
	I0719 15:51:29.080910   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.080919   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:29.080929   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:29.080941   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:29.160266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:29.160303   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:29.198221   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:29.198267   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:29.249058   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:29.249088   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:29.262711   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:29.262740   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:29.335654   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:31.836354   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:31.851895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:31.851957   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:31.887001   58817 cri.go:89] found id: ""
	I0719 15:51:31.887036   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.887052   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:31.887058   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:31.887107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:31.922102   58817 cri.go:89] found id: ""
	I0719 15:51:31.922132   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.922140   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:31.922145   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:31.922196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:31.960183   58817 cri.go:89] found id: ""
	I0719 15:51:31.960208   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.960215   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:31.960221   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:31.960263   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:31.994822   58817 cri.go:89] found id: ""
	I0719 15:51:31.994849   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.994859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:31.994865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:31.994912   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:32.034110   58817 cri.go:89] found id: ""
	I0719 15:51:32.034136   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.034145   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:32.034151   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:32.034209   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:32.071808   58817 cri.go:89] found id: ""
	I0719 15:51:32.071834   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.071842   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:32.071847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:32.071910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:32.110784   58817 cri.go:89] found id: ""
	I0719 15:51:32.110810   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.110820   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:32.110828   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:32.110895   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:32.148052   58817 cri.go:89] found id: ""
	I0719 15:51:32.148086   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.148097   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:32.148108   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:32.148124   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:32.198891   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:32.198926   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:32.212225   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:32.212251   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:32.288389   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:32.288412   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:32.288431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:32.368196   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:32.368229   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:34.911872   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:34.926689   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:34.926771   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:34.959953   58817 cri.go:89] found id: ""
	I0719 15:51:34.959982   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.959992   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:34.960000   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:34.960061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:34.999177   58817 cri.go:89] found id: ""
	I0719 15:51:34.999206   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.999216   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:34.999223   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:34.999283   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:35.036001   58817 cri.go:89] found id: ""
	I0719 15:51:35.036034   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.036045   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:35.036052   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:35.036099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:35.070375   58817 cri.go:89] found id: ""
	I0719 15:51:35.070404   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.070415   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:35.070423   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:35.070483   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:35.106940   58817 cri.go:89] found id: ""
	I0719 15:51:35.106969   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.106979   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:35.106984   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:35.107031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:35.151664   58817 cri.go:89] found id: ""
	I0719 15:51:35.151688   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.151695   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:35.151700   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:35.151748   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:35.187536   58817 cri.go:89] found id: ""
	I0719 15:51:35.187564   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.187578   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:35.187588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:35.187662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.222614   58817 cri.go:89] found id: ""
	I0719 15:51:35.222642   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.222652   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:35.222662   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:35.222677   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:35.273782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:35.273816   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:35.288147   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:35.288176   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:35.361085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:35.361107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:35.361118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:35.443327   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:35.443358   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:37.994508   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:38.007709   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:38.007779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:38.040910   58817 cri.go:89] found id: ""
	I0719 15:51:38.040940   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.040947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:38.040954   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:38.040999   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:38.080009   58817 cri.go:89] found id: ""
	I0719 15:51:38.080039   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.080058   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:38.080066   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:38.080137   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:38.115997   58817 cri.go:89] found id: ""
	I0719 15:51:38.116018   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.116026   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:38.116031   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:38.116079   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:38.150951   58817 cri.go:89] found id: ""
	I0719 15:51:38.150973   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.150981   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:38.150987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:38.151045   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:38.184903   58817 cri.go:89] found id: ""
	I0719 15:51:38.184938   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.184949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:38.184956   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:38.185014   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:38.218099   58817 cri.go:89] found id: ""
	I0719 15:51:38.218123   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.218131   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:38.218138   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:38.218192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:38.252965   58817 cri.go:89] found id: ""
	I0719 15:51:38.252990   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.252997   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:38.253003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:38.253047   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:38.289710   58817 cri.go:89] found id: ""
	I0719 15:51:38.289739   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.289749   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:38.289757   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:38.289770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:38.340686   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:38.340715   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:38.354334   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:38.354357   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:38.424410   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:38.424438   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:38.424452   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:38.500744   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:38.500781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.043436   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:41.056857   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:41.056914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:41.093651   58817 cri.go:89] found id: ""
	I0719 15:51:41.093678   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.093688   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:41.093695   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:41.093749   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:41.129544   58817 cri.go:89] found id: ""
	I0719 15:51:41.129572   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.129580   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:41.129586   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:41.129646   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:41.163416   58817 cri.go:89] found id: ""
	I0719 15:51:41.163444   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.163457   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:41.163465   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:41.163520   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:41.199180   58817 cri.go:89] found id: ""
	I0719 15:51:41.199205   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.199212   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:41.199220   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:41.199274   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:41.233891   58817 cri.go:89] found id: ""
	I0719 15:51:41.233919   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.233929   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:41.233936   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:41.233990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:41.270749   58817 cri.go:89] found id: ""
	I0719 15:51:41.270777   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.270788   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:41.270794   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:41.270841   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:41.308365   58817 cri.go:89] found id: ""
	I0719 15:51:41.308393   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.308402   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:41.308408   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:41.308462   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:41.344692   58817 cri.go:89] found id: ""
	I0719 15:51:41.344720   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.344729   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:41.344738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:41.344749   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:41.420009   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:41.420035   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:41.420052   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:41.503356   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:41.503397   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.543875   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:41.543905   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:41.595322   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:41.595353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.110343   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:44.125297   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:44.125365   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:44.160356   58817 cri.go:89] found id: ""
	I0719 15:51:44.160387   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.160398   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:44.160405   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:44.160461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:44.195025   58817 cri.go:89] found id: ""
	I0719 15:51:44.195055   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.195065   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:44.195073   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:44.195140   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:44.227871   58817 cri.go:89] found id: ""
	I0719 15:51:44.227907   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.227929   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:44.227937   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:44.228000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:44.265270   58817 cri.go:89] found id: ""
	I0719 15:51:44.265296   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.265305   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:44.265312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:44.265368   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:44.298714   58817 cri.go:89] found id: ""
	I0719 15:51:44.298744   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.298755   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:44.298762   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:44.298826   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:44.332638   58817 cri.go:89] found id: ""
	I0719 15:51:44.332665   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.332673   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:44.332679   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:44.332738   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:44.366871   58817 cri.go:89] found id: ""
	I0719 15:51:44.366897   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.366906   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:44.366913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:44.366980   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:44.409353   58817 cri.go:89] found id: ""
	I0719 15:51:44.409381   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.409392   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:44.409402   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:44.409417   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.446148   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:44.446178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:44.497188   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:44.497217   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.511904   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:44.511935   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:44.577175   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:44.577193   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:44.577208   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.161809   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:47.175425   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:47.175490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:47.213648   58817 cri.go:89] found id: ""
	I0719 15:51:47.213674   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.213681   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:47.213687   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:47.213737   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:47.249941   58817 cri.go:89] found id: ""
	I0719 15:51:47.249967   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.249979   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:47.249986   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:47.250041   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:47.284232   58817 cri.go:89] found id: ""
	I0719 15:51:47.284254   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.284261   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:47.284267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:47.284318   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:47.321733   58817 cri.go:89] found id: ""
	I0719 15:51:47.321767   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.321778   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:47.321786   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:47.321844   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:47.358479   58817 cri.go:89] found id: ""
	I0719 15:51:47.358508   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.358520   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:47.358527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:47.358582   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:47.390070   58817 cri.go:89] found id: ""
	I0719 15:51:47.390098   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.390108   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:47.390116   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:47.390176   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:47.429084   58817 cri.go:89] found id: ""
	I0719 15:51:47.429111   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.429118   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:47.429124   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:47.429179   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:47.469938   58817 cri.go:89] found id: ""
	I0719 15:51:47.469969   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.469979   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:47.469991   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:47.470005   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:47.524080   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:47.524110   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:47.538963   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:47.538993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:47.609107   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:47.609128   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:47.609143   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.691984   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:47.692028   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:50.234104   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:50.248706   58817 kubeadm.go:597] duration metric: took 4m2.874850727s to restartPrimaryControlPlane
	W0719 15:51:50.248802   58817 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:50.248827   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:50.712030   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:51:50.727328   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:51:50.737545   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:51:50.748830   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:51:50.748855   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:51:50.748900   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:51:50.758501   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:51:50.758548   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:51:50.767877   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:51:50.777413   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:51:50.777477   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:51:50.787005   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.795917   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:51:50.795971   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.805058   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:51:50.814014   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:51:50.814069   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:51:50.823876   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:51:50.893204   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:51:50.893281   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:51:51.028479   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:51:51.028607   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:51:51.028698   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:51:51.212205   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:51:51.214199   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:51:51.214313   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:51:51.214423   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:51:51.214546   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:51:51.214625   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:51:51.214728   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:51:51.214813   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:51:51.214918   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:51:51.215011   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:51:51.215121   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:51:51.215231   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:51:51.215296   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:51:51.215381   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:51:51.275010   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:51:51.481366   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:51:51.685208   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:51:51.799007   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:51:51.820431   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:51:51.822171   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:51:51.822257   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:51:51.984066   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:51:51.986034   58817 out.go:204]   - Booting up control plane ...
	I0719 15:51:51.986137   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:51:51.988167   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:51:51.989122   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:51:51.989976   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:51:52.000879   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:52:32.000607   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:52:32.000846   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:32.001125   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:37.001693   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:37.001896   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:47.002231   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:47.002432   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:07.003006   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:07.003249   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004552   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:47.004805   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004816   58817 kubeadm.go:310] 
	I0719 15:53:47.004902   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:53:47.004996   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:53:47.005020   58817 kubeadm.go:310] 
	I0719 15:53:47.005068   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:53:47.005117   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:53:47.005246   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:53:47.005262   58817 kubeadm.go:310] 
	I0719 15:53:47.005397   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:53:47.005458   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:53:47.005508   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:53:47.005522   58817 kubeadm.go:310] 
	I0719 15:53:47.005643   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:53:47.005714   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:53:47.005720   58817 kubeadm.go:310] 
	I0719 15:53:47.005828   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:53:47.005924   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:53:47.005987   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:53:47.006080   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:53:47.006092   58817 kubeadm.go:310] 
	I0719 15:53:47.006824   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:53:47.006941   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:53:47.007028   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 15:53:47.007180   58817 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 15:53:47.007244   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:53:47.468272   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:53:47.483560   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:53:47.494671   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:53:47.494691   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:53:47.494742   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:53:47.503568   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:53:47.503630   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:53:47.512606   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:53:47.521247   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:53:47.521303   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:53:47.530361   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.539748   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:53:47.539799   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.549243   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:53:47.559306   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:53:47.559369   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:53:47.570095   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:53:47.648871   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:53:47.649078   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:53:47.792982   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:53:47.793141   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:53:47.793254   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:53:47.992636   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:53:47.994547   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:53:47.994648   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:53:47.994734   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:53:47.994866   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:53:47.994963   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:53:47.995077   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:53:47.995148   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:53:47.995250   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:53:47.995336   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:53:47.995447   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:53:47.995549   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:53:47.995603   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:53:47.995685   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:53:48.092671   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:53:48.256432   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:53:48.334799   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:53:48.483435   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:53:48.504681   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:53:48.505503   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:53:48.505553   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:53:48.654795   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:53:48.656738   58817 out.go:204]   - Booting up control plane ...
	I0719 15:53:48.656849   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:53:48.664278   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:53:48.665556   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:53:48.666292   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:53:48.668355   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:54:28.670119   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:54:28.670451   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:28.670679   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:33.671159   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:33.671408   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:43.671899   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:43.672129   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:03.673219   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:03.673444   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674003   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:43.674282   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674311   58817 kubeadm.go:310] 
	I0719 15:55:43.674362   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:55:43.674430   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:55:43.674439   58817 kubeadm.go:310] 
	I0719 15:55:43.674479   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:55:43.674551   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:55:43.674694   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:55:43.674711   58817 kubeadm.go:310] 
	I0719 15:55:43.674872   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:55:43.674923   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:55:43.674973   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:55:43.674987   58817 kubeadm.go:310] 
	I0719 15:55:43.675076   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:55:43.675185   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:55:43.675204   58817 kubeadm.go:310] 
	I0719 15:55:43.675343   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:55:43.675486   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:55:43.675593   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:55:43.675698   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:55:43.675712   58817 kubeadm.go:310] 
	I0719 15:55:43.676679   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:55:43.676793   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:55:43.676881   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:55:43.676950   58817 kubeadm.go:394] duration metric: took 7m56.357000435s to StartCluster
	I0719 15:55:43.677009   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:55:43.677063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:55:43.720714   58817 cri.go:89] found id: ""
	I0719 15:55:43.720746   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.720757   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:55:43.720765   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:55:43.720832   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:55:43.758961   58817 cri.go:89] found id: ""
	I0719 15:55:43.758987   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.758995   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:55:43.759001   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:55:43.759048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:55:43.798844   58817 cri.go:89] found id: ""
	I0719 15:55:43.798872   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.798882   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:55:43.798889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:55:43.798960   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:55:43.835395   58817 cri.go:89] found id: ""
	I0719 15:55:43.835418   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.835426   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:55:43.835432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:55:43.835499   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:55:43.871773   58817 cri.go:89] found id: ""
	I0719 15:55:43.871800   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.871810   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:55:43.871817   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:55:43.871881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:55:43.903531   58817 cri.go:89] found id: ""
	I0719 15:55:43.903552   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.903559   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:55:43.903565   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:55:43.903613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:55:43.943261   58817 cri.go:89] found id: ""
	I0719 15:55:43.943288   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.943299   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:55:43.943306   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:55:43.943364   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:55:43.980788   58817 cri.go:89] found id: ""
	I0719 15:55:43.980815   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.980826   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:55:43.980837   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:55:43.980853   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:55:44.033880   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:55:44.033922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:55:44.048683   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:55:44.048709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:55:44.129001   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:55:44.129028   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:55:44.129043   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:55:44.245246   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:55:44.245282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:55:44.303587   58817 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:55:44.303632   58817 out.go:239] * 
	* 
	W0719 15:55:44.303689   58817 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.303716   58817 out.go:239] * 
	* 
	W0719 15:55:44.304733   58817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:55:44.308714   58817 out.go:177] 
	W0719 15:55:44.310103   58817 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.310163   58817 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:55:44.310190   58817 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:55:44.311707   58817 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-862924 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (227.564704ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-862924 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-862924 logs -n 25: (1.584942158s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-127438 -- sudo                         | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-127438                                 | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-574044                           | kubernetes-upgrade-574044    | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-817144            | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:44:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:44:39.385249   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385257   59208 out.go:304] Setting ErrFile to fd 2...
	I0719 15:44:39.385261   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385405   59208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:44:39.385919   59208 out.go:298] Setting JSON to false
	I0719 15:44:39.386767   59208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5225,"bootTime":1721398654,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:44:39.386817   59208 start.go:139] virtualization: kvm guest
	I0719 15:44:39.390104   59208 out.go:177] * [default-k8s-diff-port-601445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:44:39.391867   59208 notify.go:220] Checking for updates...
	I0719 15:44:39.391890   59208 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:44:39.393463   59208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:44:39.394883   59208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:44:39.396081   59208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:44:39.397280   59208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:44:39.398540   59208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:44:39.400177   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:44:39.400543   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.400601   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.415749   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0719 15:44:39.416104   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.416644   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.416664   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.416981   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.417206   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.417443   59208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:44:39.417751   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.417793   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.432550   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0719 15:44:39.433003   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.433478   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.433504   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.433836   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.434083   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.467474   59208 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:44:38.674498   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:39.468897   59208 start.go:297] selected driver: kvm2
	I0719 15:44:39.468921   59208 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.469073   59208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:44:39.470083   59208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.470178   59208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:44:39.485232   59208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:44:39.485586   59208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:44:39.485616   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:44:39.485624   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:44:39.485666   59208 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.485752   59208 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.487537   59208 out.go:177] * Starting "default-k8s-diff-port-601445" primary control-plane node in "default-k8s-diff-port-601445" cluster
	I0719 15:44:39.488672   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:44:39.488709   59208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:44:39.488718   59208 cache.go:56] Caching tarball of preloaded images
	I0719 15:44:39.488795   59208 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:44:39.488807   59208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:44:39.488895   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:44:39.489065   59208 start.go:360] acquireMachinesLock for default-k8s-diff-port-601445: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:44:41.746585   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:47.826521   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:50.898507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:56.978531   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:00.050437   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:06.130631   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:09.202570   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:15.282481   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:18.354537   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:24.434488   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:27.506515   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:33.586522   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:36.658503   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:42.738573   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:45.810538   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:51.890547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:54.962507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:01.042509   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:04.114621   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:10.194576   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:13.266450   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:19.346524   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:22.418506   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:28.498553   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:31.570507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:37.650477   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:40.722569   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:46.802495   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:49.874579   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:55.954547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:59.026454   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:47:02.030619   58417 start.go:364] duration metric: took 4m36.939495617s to acquireMachinesLock for "no-preload-382231"
	I0719 15:47:02.030679   58417 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:02.030685   58417 fix.go:54] fixHost starting: 
	I0719 15:47:02.031010   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:02.031039   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:02.046256   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0719 15:47:02.046682   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:02.047151   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:47:02.047178   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:02.047573   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:02.047818   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:02.048023   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:47:02.049619   58417 fix.go:112] recreateIfNeeded on no-preload-382231: state=Stopped err=<nil>
	I0719 15:47:02.049641   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	W0719 15:47:02.049785   58417 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:02.051800   58417 out.go:177] * Restarting existing kvm2 VM for "no-preload-382231" ...
	I0719 15:47:02.028090   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:02.028137   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028489   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:47:02.028517   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:47:02.030488   58376 machine.go:97] duration metric: took 4m37.428160404s to provisionDockerMachine
	I0719 15:47:02.030529   58376 fix.go:56] duration metric: took 4m37.450063037s for fixHost
	I0719 15:47:02.030535   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 4m37.450081944s
	W0719 15:47:02.030559   58376 start.go:714] error starting host: provision: host is not running
	W0719 15:47:02.030673   58376 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 15:47:02.030686   58376 start.go:729] Will try again in 5 seconds ...
	I0719 15:47:02.053160   58417 main.go:141] libmachine: (no-preload-382231) Calling .Start
	I0719 15:47:02.053325   58417 main.go:141] libmachine: (no-preload-382231) Ensuring networks are active...
	I0719 15:47:02.054289   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network default is active
	I0719 15:47:02.054786   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network mk-no-preload-382231 is active
	I0719 15:47:02.055259   58417 main.go:141] libmachine: (no-preload-382231) Getting domain xml...
	I0719 15:47:02.056202   58417 main.go:141] libmachine: (no-preload-382231) Creating domain...
	I0719 15:47:03.270495   58417 main.go:141] libmachine: (no-preload-382231) Waiting to get IP...
	I0719 15:47:03.271595   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.272074   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.272151   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.272057   59713 retry.go:31] will retry after 239.502065ms: waiting for machine to come up
	I0719 15:47:03.513745   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.514224   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.514264   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.514191   59713 retry.go:31] will retry after 315.982717ms: waiting for machine to come up
	I0719 15:47:03.831739   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.832155   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.832187   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.832111   59713 retry.go:31] will retry after 468.820113ms: waiting for machine to come up
	I0719 15:47:04.302865   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.303273   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.303306   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.303236   59713 retry.go:31] will retry after 526.764683ms: waiting for machine to come up
	I0719 15:47:04.832048   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.832551   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.832583   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.832504   59713 retry.go:31] will retry after 754.533212ms: waiting for machine to come up
	I0719 15:47:07.032310   58376 start.go:360] acquireMachinesLock for embed-certs-817144: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:47:05.588374   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:05.588834   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:05.588862   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:05.588785   59713 retry.go:31] will retry after 757.18401ms: waiting for machine to come up
	I0719 15:47:06.347691   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:06.348135   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:06.348164   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:06.348053   59713 retry.go:31] will retry after 1.097437331s: waiting for machine to come up
	I0719 15:47:07.446836   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:07.447199   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:07.447219   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:07.447158   59713 retry.go:31] will retry after 1.448513766s: waiting for machine to come up
	I0719 15:47:08.897886   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:08.898289   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:08.898317   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:08.898216   59713 retry.go:31] will retry after 1.583843671s: waiting for machine to come up
	I0719 15:47:10.483476   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:10.483934   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:10.483963   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:10.483864   59713 retry.go:31] will retry after 1.86995909s: waiting for machine to come up
	I0719 15:47:12.355401   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:12.355802   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:12.355827   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:12.355762   59713 retry.go:31] will retry after 2.577908462s: waiting for machine to come up
	I0719 15:47:14.934837   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:14.935263   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:14.935285   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:14.935225   59713 retry.go:31] will retry after 3.158958575s: waiting for machine to come up
	I0719 15:47:19.278747   58817 start.go:364] duration metric: took 3m55.914249116s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:47:19.278822   58817 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:19.278831   58817 fix.go:54] fixHost starting: 
	I0719 15:47:19.279163   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:19.279196   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:19.294722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0719 15:47:19.295092   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:19.295537   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:47:19.295561   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:19.295950   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:19.296186   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:19.296333   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:47:19.297864   58817 fix.go:112] recreateIfNeeded on old-k8s-version-862924: state=Stopped err=<nil>
	I0719 15:47:19.297895   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	W0719 15:47:19.298077   58817 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:19.300041   58817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	I0719 15:47:18.095456   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095912   58417 main.go:141] libmachine: (no-preload-382231) Found IP for machine: 192.168.39.227
	I0719 15:47:18.095936   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has current primary IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095942   58417 main.go:141] libmachine: (no-preload-382231) Reserving static IP address...
	I0719 15:47:18.096317   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.096357   58417 main.go:141] libmachine: (no-preload-382231) Reserved static IP address: 192.168.39.227
	I0719 15:47:18.096376   58417 main.go:141] libmachine: (no-preload-382231) DBG | skip adding static IP to network mk-no-preload-382231 - found existing host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"}
	I0719 15:47:18.096392   58417 main.go:141] libmachine: (no-preload-382231) DBG | Getting to WaitForSSH function...
	I0719 15:47:18.096407   58417 main.go:141] libmachine: (no-preload-382231) Waiting for SSH to be available...
	I0719 15:47:18.098619   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.098978   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.099008   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.099122   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH client type: external
	I0719 15:47:18.099151   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa (-rw-------)
	I0719 15:47:18.099183   58417 main.go:141] libmachine: (no-preload-382231) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:18.099196   58417 main.go:141] libmachine: (no-preload-382231) DBG | About to run SSH command:
	I0719 15:47:18.099210   58417 main.go:141] libmachine: (no-preload-382231) DBG | exit 0
	I0719 15:47:18.222285   58417 main.go:141] libmachine: (no-preload-382231) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:18.222607   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetConfigRaw
	I0719 15:47:18.223181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.225751   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226062   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.226105   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226327   58417 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/config.json ...
	I0719 15:47:18.226504   58417 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:18.226520   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:18.226684   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.228592   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.228936   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.228960   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.229094   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.229246   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229398   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229516   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.229663   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.229887   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.229901   58417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:18.330731   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:18.330764   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331053   58417 buildroot.go:166] provisioning hostname "no-preload-382231"
	I0719 15:47:18.331084   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331282   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.333905   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334212   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.334270   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334331   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.334510   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334705   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334850   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.335030   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.335216   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.335230   58417 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-382231 && echo "no-preload-382231" | sudo tee /etc/hostname
	I0719 15:47:18.453128   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-382231
	
	I0719 15:47:18.453151   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.455964   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456323   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.456349   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456549   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.456822   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457010   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457158   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.457300   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.457535   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.457561   58417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-382231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-382231/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-382231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:18.568852   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:18.568878   58417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:18.568902   58417 buildroot.go:174] setting up certificates
	I0719 15:47:18.568915   58417 provision.go:84] configureAuth start
	I0719 15:47:18.568924   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.569240   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.571473   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.571757   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.571783   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.572029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.573941   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574213   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.574247   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574393   58417 provision.go:143] copyHostCerts
	I0719 15:47:18.574455   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:18.574465   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:18.574528   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:18.574615   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:18.574622   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:18.574645   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:18.574696   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:18.574703   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:18.574722   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
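
The copyHostCerts steps above follow a simple pattern: if a previous copy of ca.pem, cert.pem, or key.pem already exists in the profile directory it is removed, then the file is copied back from the certs directory. A minimal Go sketch of that remove-then-copy pattern (paths and the 0600 mode are illustrative assumptions, not minikube's actual code):

	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	// copyCert mirrors the "found ..., removing ..." then "cp: ... --> ..." steps:
	// drop any stale destination file, then copy the source over with tight perms.
	func copyCert(src, dstDir string) error {
		dst := filepath.Join(dstDir, filepath.Base(src))
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return fmt.Errorf("rm %s: %w", dst, err)
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		for _, f := range []string{"certs/ca.pem", "certs/cert.pem", "certs/key.pem"} {
			if err := copyCert(f, "."); err != nil {
				fmt.Println("copy failed:", err)
			}
		}
	}
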
	I0719 15:47:18.574768   58417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.no-preload-382231 san=[127.0.0.1 192.168.39.227 localhost minikube no-preload-382231]
	I0719 15:47:18.636408   58417 provision.go:177] copyRemoteCerts
	I0719 15:47:18.636458   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:18.636477   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.638719   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639021   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.639054   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639191   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.639379   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.639532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.639795   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:18.720305   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:18.742906   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:18.764937   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:47:18.787183   58417 provision.go:87] duration metric: took 218.257504ms to configureAuth
	I0719 15:47:18.787205   58417 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:18.787355   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:47:18.787418   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.789685   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.789992   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.790017   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.790181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.790366   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790632   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.790770   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.790929   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.790943   58417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:19.053326   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:19.053350   58417 machine.go:97] duration metric: took 826.83404ms to provisionDockerMachine
	I0719 15:47:19.053364   58417 start.go:293] postStartSetup for "no-preload-382231" (driver="kvm2")
	I0719 15:47:19.053379   58417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:19.053409   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.053733   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:19.053755   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.056355   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056709   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.056737   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056884   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.057037   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.057172   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.057370   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.136785   58417 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:19.140756   58417 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:19.140777   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:19.140847   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:19.140941   58417 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:19.141044   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:19.150247   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:19.172800   58417 start.go:296] duration metric: took 119.424607ms for postStartSetup
	I0719 15:47:19.172832   58417 fix.go:56] duration metric: took 17.142146552s for fixHost
	I0719 15:47:19.172849   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.175427   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.175816   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.175851   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.176027   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.176281   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176636   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.176892   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:19.177051   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:19.177061   58417 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:47:19.278564   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404039.251890495
	
	I0719 15:47:19.278594   58417 fix.go:216] guest clock: 1721404039.251890495
	I0719 15:47:19.278605   58417 fix.go:229] Guest: 2024-07-19 15:47:19.251890495 +0000 UTC Remote: 2024-07-19 15:47:19.172835531 +0000 UTC m=+294.220034318 (delta=79.054964ms)
	I0719 15:47:19.278651   58417 fix.go:200] guest clock delta is within tolerance: 79.054964ms
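
The three fix.go lines above compare the guest's `date +%s.%N` output with the host's clock and only force a resync when the difference is too large. A small Go sketch of that check, reusing the two timestamps from the log; the 2s tolerance is an assumption for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "seconds.nanoseconds" (the `date +%s.%N` output)
	// into a time.Time; it assumes the fractional part is exactly nanoseconds.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1721404039.251890495") // guest value from the log
		if err != nil {
			panic(err)
		}
		remote := time.Unix(1721404039, 172835531) // "Remote" timestamp from the log
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
	}
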
	I0719 15:47:19.278659   58417 start.go:83] releasing machines lock for "no-preload-382231", held for 17.247997118s
	I0719 15:47:19.278692   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.279029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:19.281674   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282034   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.282063   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282221   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282750   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282935   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282991   58417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:19.283061   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.283095   58417 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:19.283116   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.285509   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285805   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.285828   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285846   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285959   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286182   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286276   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.286300   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.286468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286481   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286632   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.286672   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286806   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286935   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.363444   58417 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:19.387514   58417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:19.545902   58417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:19.551747   58417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:19.551812   58417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:19.568563   58417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:19.568589   58417 start.go:495] detecting cgroup driver to use...
	I0719 15:47:19.568654   58417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:19.589440   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:19.604889   58417 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:19.604962   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:19.624114   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:19.638265   58417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:19.752880   58417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:19.900078   58417 docker.go:233] disabling docker service ...
	I0719 15:47:19.900132   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:19.914990   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:19.928976   58417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:20.079363   58417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:20.203629   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:20.218502   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:20.237028   58417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 15:47:20.237089   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.248514   58417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:20.248597   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.260162   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.272166   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.283341   58417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:20.294687   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.305495   58417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.328024   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
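
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and make sure a default_sysctls block opens net.ipv4.ip_unprivileged_port_start=0. A rough in-memory rendition of the first two edits in Go, just to show the pattern/replacement shape; the sample config content is invented for illustration:

	package main

	import (
		"fmt"
		"regexp"
	)

	// apply mimics one `sudo sed -i 's|pattern|replacement|' file` edit, but on a string.
	func apply(conf, pattern, replacement string) string {
		return regexp.MustCompile(pattern).ReplaceAllString(conf, replacement)
	}

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		conf = apply(conf, `(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = apply(conf, `(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
		// The remaining edits follow the same shape: delete any old conmon_cgroup line,
		// append conmon_cgroup = "pod" after cgroup_manager, and insert
		// "net.ipv4.ip_unprivileged_port_start=0" into a default_sysctls block.
		fmt.Print(conf)
	}
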
	I0719 15:47:20.339666   58417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:20.349271   58417 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:20.349314   58417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:20.364130   58417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
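
The fallback above is worth noting: on a freshly booted guest, `sysctl net.bridge.bridge-nf-call-iptables` can fail with status 255 because the bridge module is not loaded yet, so minikube loads br_netfilter with modprobe and then enables IPv4 forwarding. A small sketch of that ordering (run locally here instead of over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and echoes its combined output, returning any error.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			// "couldn't verify netfilter ... which might be okay": load the module instead.
			_ = run("sudo", "modprobe", "br_netfilter")
		}
		_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	}
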
	I0719 15:47:20.376267   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:20.501259   58417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:20.643763   58417 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:20.643828   58417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:20.648525   58417 start.go:563] Will wait 60s for crictl version
	I0719 15:47:20.648586   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:20.652256   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:20.689386   58417 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:20.689468   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.720662   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.751393   58417 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
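
Both "Will wait 60s for ..." lines above are bounded polls: stat the CRI socket, then ask crictl for its version, retrying until the 60s deadline. A minimal Go sketch of such a poll loop; the 500ms interval is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitFor retries check until it succeeds or the timeout elapses.
	func waitFor(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
	}

	func main() {
		_ = waitFor(60*time.Second, func() error {
			return exec.Command("stat", "/var/run/crio/crio.sock").Run()
		})
		_ = waitFor(60*time.Second, func() error {
			return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
		})
	}
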
	I0719 15:47:19.301467   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .Start
	I0719 15:47:19.301647   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:47:19.302430   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:47:19.302790   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:47:19.303288   58817 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:47:19.304087   58817 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:47:20.540210   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:47:20.541173   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.541580   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.541657   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.541560   59851 retry.go:31] will retry after 276.525447ms: waiting for machine to come up
	I0719 15:47:20.820097   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.820549   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.820577   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.820512   59851 retry.go:31] will retry after 350.128419ms: waiting for machine to come up
	I0719 15:47:21.172277   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.172787   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.172814   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.172742   59851 retry.go:31] will retry after 437.780791ms: waiting for machine to come up
	I0719 15:47:21.612338   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.612766   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.612796   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.612710   59851 retry.go:31] will retry after 607.044351ms: waiting for machine to come up
	I0719 15:47:22.221152   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.221715   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.221755   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.221589   59851 retry.go:31] will retry after 568.388882ms: waiting for machine to come up
	I0719 15:47:22.791499   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.791966   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.791996   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.791912   59851 retry.go:31] will retry after 786.805254ms: waiting for machine to come up
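
The old-k8s-version-862924 machine is still waiting for its DHCP lease at this point: the KVM driver looks up the domain's MAC in the lease table and, when nothing is found, retries after a growing, jittered delay (276ms, 350ms, 437ms, 607ms, ... in the log). A compact Go sketch of that retry loop; lookupIP is a placeholder for the real lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("unable to find current IP address")

	// lookupIP is a stand-in; the real code reads libvirt's DHCP leases for the MAC.
	func lookupIP(mac string) (string, error) {
		return "", errNoLease
	}

	func waitForIP(mac string, attempts int) (string, error) {
		for i := 0; i < attempts; i++ {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// A few hundred ms with jitter at first, stretching out on later attempts.
			d := time.Duration(200+rand.Intn(300))*time.Millisecond + time.Duration(i)*400*time.Millisecond
			fmt.Printf("retry %d: will retry after %v\n", i+1, d)
			time.Sleep(d)
		}
		return "", errNoLease
	}

	func main() {
		_, err := waitForIP("52:54:00:36:d7:f3", 5)
		fmt.Println("result:", err)
	}
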
	I0719 15:47:20.752939   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:20.755996   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756367   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:20.756395   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756723   58417 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:20.760962   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:20.776973   58417 kubeadm.go:883] updating cluster {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:20.777084   58417 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 15:47:20.777120   58417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:20.814520   58417 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 15:47:20.814547   58417 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:20.814631   58417 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:20.814650   58417 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.814657   58417 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.814682   58417 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.814637   58417 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.814736   58417 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.814808   58417 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.814742   58417 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.816435   58417 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.816446   58417 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.816513   58417 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.816535   58417 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816559   58417 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.816719   58417 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:21.003845   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 15:47:21.028954   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.039628   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.041391   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.065499   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.084966   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.142812   58417 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 15:47:21.142873   58417 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 15:47:21.142905   58417 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.142921   58417 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 15:47:21.142939   58417 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.142962   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142877   58417 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.143025   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142983   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.160141   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.182875   58417 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 15:47:21.182918   58417 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.182945   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.182958   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.182957   58417 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 15:47:21.182992   58417 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.183029   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.183044   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.183064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.272688   58417 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 15:47:21.272724   58417 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.272768   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.272783   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272825   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.272876   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272906   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 15:47:21.272931   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.272971   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:21.272997   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 15:47:21.273064   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:21.326354   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326356   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.326441   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 15:47:21.326457   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326459   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326492   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326497   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.326529   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 15:47:21.326535   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 15:47:21.326633   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.363401   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:21.363496   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:22.268448   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.010876   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.684346805s)
	I0719 15:47:24.010910   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 15:47:24.010920   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.684439864s)
	I0719 15:47:24.010952   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 15:47:24.010930   58417 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.010993   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.684342001s)
	I0719 15:47:24.011014   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.011019   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011046   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.647533327s)
	I0719 15:47:24.011066   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011098   58417 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742620594s)
	I0719 15:47:24.011137   58417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 15:47:24.011170   58417 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.011204   58417 ssh_runner.go:195] Run: which crictl
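
Because no preload tarball exists for v1.31.0-beta.0, every required image goes through the same decision: inspect it in the runtime, and if the expected hash is not there, mark it "needs transfer", remove any stale copy with crictl rmi, and load the cached tarball with podman load. A simplified Go sketch of that flow; the cache directory is taken from the log paths and the hash comparison is reduced to a bare existence test here:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// hasImage reports whether the container runtime already knows the image.
	func hasImage(ref string) bool {
		return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
	}

	// loadFromCache drops any stale copy and loads the cached tarball, e.g.
	// registry.k8s.io/etcd:3.5.14-0 -> <cacheDir>/etcd_3.5.14-0.
	func loadFromCache(ref, cacheDir string) error {
		tar := filepath.Join(cacheDir, strings.ReplaceAll(filepath.Base(ref), ":", "_"))
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run() // ignore "not found"
		return exec.Command("sudo", "podman", "load", "-i", tar).Run()
	}

	func main() {
		images := []string{"registry.k8s.io/etcd:3.5.14-0", "registry.k8s.io/coredns/coredns:v1.11.1"}
		for _, ref := range images {
			if hasImage(ref) {
				continue
			}
			fmt.Printf("%q needs transfer\n", ref)
			if err := loadFromCache(ref, "/var/lib/minikube/images"); err != nil {
				fmt.Println("load failed:", err)
			}
		}
	}
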
	I0719 15:47:23.580485   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:23.580950   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:23.580983   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:23.580876   59851 retry.go:31] will retry after 919.322539ms: waiting for machine to come up
	I0719 15:47:24.502381   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:24.502817   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:24.502844   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:24.502776   59851 retry.go:31] will retry after 1.142581835s: waiting for machine to come up
	I0719 15:47:25.647200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:25.647663   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:25.647693   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:25.647559   59851 retry.go:31] will retry after 1.682329055s: waiting for machine to come up
	I0719 15:47:27.332531   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:27.333052   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:27.333080   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:27.333003   59851 retry.go:31] will retry after 1.579786507s: waiting for machine to come up
	I0719 15:47:27.292973   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.281931356s)
	I0719 15:47:27.293008   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 15:47:27.293001   58417 ssh_runner.go:235] Completed: which crictl: (3.281778521s)
	I0719 15:47:27.293043   58417 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:27.293064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:27.293086   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:29.269642   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976526914s)
	I0719 15:47:29.269676   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 15:47:29.269698   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269641   58417 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.97655096s)
	I0719 15:47:29.269748   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269773   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 15:47:29.269875   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:28.914628   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:28.915181   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:28.915221   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:28.915127   59851 retry.go:31] will retry after 2.156491688s: waiting for machine to come up
	I0719 15:47:31.073521   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:31.074101   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:31.074136   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:31.074039   59851 retry.go:31] will retry after 2.252021853s: waiting for machine to come up
	I0719 15:47:31.242199   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.972421845s)
	I0719 15:47:31.242257   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 15:47:31.242273   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.972374564s)
	I0719 15:47:31.242283   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:31.242306   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 15:47:31.242334   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:32.592736   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.350379333s)
	I0719 15:47:32.592762   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 15:47:32.592782   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:32.592817   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:34.547084   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954243196s)
	I0719 15:47:34.547122   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 15:47:34.547155   58417 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:34.547231   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:33.328344   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:33.328815   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:33.328849   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:33.328779   59851 retry.go:31] will retry after 4.118454422s: waiting for machine to come up
	I0719 15:47:37.451169   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.451651   58817 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:47:37.451677   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:47:37.451691   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.452205   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.452240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | skip adding static IP to network mk-old-k8s-version-862924 - found existing host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"}
	I0719 15:47:37.452258   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:47:37.452276   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:47:37.452287   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:47:37.454636   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455004   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.455043   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455210   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:47:37.455242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:47:37.455284   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:37.455302   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:47:37.455316   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:47:37.583375   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
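
WaitForSSH above uses the external ssh client with the options shown in the DBG line and simply runs `exit 0` until it succeeds. A sketch of that probe; the retry interval and attempt count are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns true once `ssh ... docker@host exit 0` succeeds.
	func sshReady(host, key string) bool {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes", "-i", key, "-p", "22",
			"docker@" + host, "exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa"
		for i := 0; i < 10; i++ {
			if sshReady("192.168.50.102", key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second) // assumed retry interval
		}
		fmt.Println("gave up waiting for SSH")
	}
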
	I0719 15:47:37.583754   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:47:37.584481   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.587242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587644   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.587668   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587961   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:47:37.588195   58817 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:37.588217   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:37.588446   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.590801   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591137   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.591166   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.591471   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591592   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591736   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.591896   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.592100   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.592111   58817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:37.698760   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:37.698787   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699086   58817 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:47:37.699113   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699326   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.701828   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702208   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.702253   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702339   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.702508   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702674   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702817   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.702983   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.703136   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.703147   58817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:47:37.823930   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:47:37.823960   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.826546   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.826875   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.826912   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.827043   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.827336   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827506   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827690   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.827858   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.828039   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.828056   58817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:37.935860   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:37.935888   58817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:37.935917   58817 buildroot.go:174] setting up certificates
	I0719 15:47:37.935927   58817 provision.go:84] configureAuth start
	I0719 15:47:37.935939   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.936223   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.938638   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.938990   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.939017   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.939170   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.941161   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941458   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.941487   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941597   58817 provision.go:143] copyHostCerts
	I0719 15:47:37.941669   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:37.941682   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:37.941731   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:37.941824   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:37.941832   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:37.941850   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:37.941910   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:37.941919   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:37.941942   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:37.942003   58817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
	I0719 15:47:38.046717   58817 provision.go:177] copyRemoteCerts
	I0719 15:47:38.046770   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:38.046799   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.049240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049578   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.049611   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049806   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.050026   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.050200   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.050377   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.133032   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:38.157804   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:47:38.184189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:38.207761   58817 provision.go:87] duration metric: took 271.801669ms to configureAuth
	I0719 15:47:38.207801   58817 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:38.208023   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:47:38.208148   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.211030   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211467   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.211497   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211675   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.211851   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212046   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212195   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.212374   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.212556   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.212578   58817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:38.759098   59208 start.go:364] duration metric: took 2m59.27000152s to acquireMachinesLock for "default-k8s-diff-port-601445"
	I0719 15:47:38.759165   59208 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:38.759176   59208 fix.go:54] fixHost starting: 
	I0719 15:47:38.759633   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:38.759685   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:38.779587   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0719 15:47:38.779979   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:38.780480   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:47:38.780497   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:38.780888   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:38.781129   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:38.781260   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:47:38.782786   59208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601445: state=Stopped err=<nil>
	I0719 15:47:38.782860   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	W0719 15:47:38.783056   59208 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:38.785037   59208 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-601445" ...
	I0719 15:47:38.786497   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Start
	I0719 15:47:38.786691   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring networks are active...
	I0719 15:47:38.787520   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network default is active
	I0719 15:47:38.787819   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network mk-default-k8s-diff-port-601445 is active
	I0719 15:47:38.788418   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Getting domain xml...
	I0719 15:47:38.789173   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Creating domain...
	I0719 15:47:35.191148   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 15:47:35.191193   58417 cache_images.go:123] Successfully loaded all cached images
	I0719 15:47:35.191198   58417 cache_images.go:92] duration metric: took 14.376640053s to LoadCachedImages
	I0719 15:47:35.191209   58417 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0-beta.0 crio true true} ...
	I0719 15:47:35.191329   58417 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-382231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:35.191424   58417 ssh_runner.go:195] Run: crio config
	I0719 15:47:35.236248   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:35.236276   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:35.236288   58417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:35.236309   58417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-382231 NodeName:no-preload-382231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:47:35.236464   58417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-382231"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:35.236525   58417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 15:47:35.247524   58417 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:35.247611   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:35.257583   58417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 15:47:35.275057   58417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 15:47:35.291468   58417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 15:47:35.308021   58417 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:35.312121   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:35.324449   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:35.451149   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:35.477844   58417 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231 for IP: 192.168.39.227
	I0719 15:47:35.477868   58417 certs.go:194] generating shared ca certs ...
	I0719 15:47:35.477887   58417 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:35.478043   58417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:35.478093   58417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:35.478103   58417 certs.go:256] generating profile certs ...
	I0719 15:47:35.478174   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.key
	I0719 15:47:35.478301   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key.46f9a235
	I0719 15:47:35.478339   58417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key
	I0719 15:47:35.478482   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:35.478520   58417 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:35.478530   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:35.478549   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:35.478569   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:35.478591   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:35.478628   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:35.479291   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:35.523106   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:35.546934   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:35.585616   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:35.617030   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:47:35.641486   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:47:35.680051   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:35.703679   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:47:35.728088   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:35.751219   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:35.774149   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:35.796985   58417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:35.813795   58417 ssh_runner.go:195] Run: openssl version
	I0719 15:47:35.819568   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:35.830350   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834792   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834847   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.840531   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:35.851584   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:35.862655   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867139   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867199   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.872916   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:35.883986   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:35.894795   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899001   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899049   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.904496   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:35.915180   58417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:35.919395   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:35.926075   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:35.931870   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:35.938089   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:35.944079   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:35.950449   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:47:35.956291   58417 kubeadm.go:392] StartCluster: {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:35.956396   58417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:35.956452   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:35.993976   58417 cri.go:89] found id: ""
	I0719 15:47:35.994047   58417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:36.004507   58417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:36.004532   58417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:36.004579   58417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:36.014644   58417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:36.015628   58417 kubeconfig.go:125] found "no-preload-382231" server: "https://192.168.39.227:8443"
	I0719 15:47:36.017618   58417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:36.027252   58417 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0719 15:47:36.027281   58417 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:36.027292   58417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:36.027350   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:36.066863   58417 cri.go:89] found id: ""
	I0719 15:47:36.066934   58417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:36.082971   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:36.092782   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:36.092802   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:36.092841   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:36.101945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:36.101998   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:36.111368   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:36.120402   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:36.120447   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:36.130124   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.138945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:36.138990   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.148176   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:36.157008   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:36.157060   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:36.166273   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:36.176032   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:36.291855   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.285472   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.476541   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.547807   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.652551   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:37.652649   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.153088   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.653690   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.718826   58417 api_server.go:72] duration metric: took 1.066275053s to wait for apiserver process to appear ...
	I0719 15:47:38.718858   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:47:38.718891   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:38.503709   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:38.503737   58817 machine.go:97] duration metric: took 915.527957ms to provisionDockerMachine
	I0719 15:47:38.503750   58817 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:47:38.503762   58817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:38.503783   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.504151   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:38.504180   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.507475   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.507843   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.507877   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.508083   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.508314   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.508465   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.508583   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.593985   58817 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:38.598265   58817 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:38.598287   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:38.598352   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:38.598446   58817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:38.598533   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:38.609186   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:38.644767   58817 start.go:296] duration metric: took 141.002746ms for postStartSetup
	I0719 15:47:38.644808   58817 fix.go:56] duration metric: took 19.365976542s for fixHost
	I0719 15:47:38.644836   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.648171   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648545   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.648576   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648777   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.649009   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649185   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649360   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.649513   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.649779   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.649795   58817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:38.758955   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404058.716653194
	
	I0719 15:47:38.758978   58817 fix.go:216] guest clock: 1721404058.716653194
	I0719 15:47:38.758987   58817 fix.go:229] Guest: 2024-07-19 15:47:38.716653194 +0000 UTC Remote: 2024-07-19 15:47:38.644812576 +0000 UTC m=+255.418683135 (delta=71.840618ms)
	I0719 15:47:38.759010   58817 fix.go:200] guest clock delta is within tolerance: 71.840618ms
	I0719 15:47:38.759017   58817 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 19.4802155s
	I0719 15:47:38.759056   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.759308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:38.761901   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762334   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.762368   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762525   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763030   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763198   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763296   58817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:38.763343   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.763489   58817 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:38.763522   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.766613   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.766771   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767028   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767050   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767219   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767298   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767377   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767453   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767577   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767637   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767723   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767768   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.767845   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.874680   58817 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:38.882155   58817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:39.030824   58817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:39.038357   58817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:39.038458   58817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:39.059981   58817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:39.060015   58817 start.go:495] detecting cgroup driver to use...
	I0719 15:47:39.060081   58817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:39.082631   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:39.101570   58817 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:39.101628   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:39.120103   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:39.139636   58817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:39.259574   58817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:39.441096   58817 docker.go:233] disabling docker service ...
	I0719 15:47:39.441162   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:39.460197   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:39.476884   58817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:39.639473   58817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:39.773468   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:39.790968   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:39.811330   58817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:47:39.811407   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.823965   58817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:39.824057   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.835454   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.846201   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.856951   58817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:39.869495   58817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:39.880850   58817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:39.880914   58817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:39.900465   58817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:39.911488   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:40.032501   58817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:40.194606   58817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:40.194676   58817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:40.199572   58817 start.go:563] Will wait 60s for crictl version
	I0719 15:47:40.199683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:40.203747   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:40.246479   58817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:40.246594   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.275992   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.313199   58817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:47:40.314363   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:40.317688   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318081   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:40.318106   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318333   58817 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:40.323006   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:40.336488   58817 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:40.336626   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:47:40.336672   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:40.394863   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:40.394934   58817 ssh_runner.go:195] Run: which lz4
	I0719 15:47:40.399546   58817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:47:40.404163   58817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:47:40.404197   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:47:42.191817   58817 crio.go:462] duration metric: took 1.792317426s to copy over tarball
	I0719 15:47:42.191882   58817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:41.984204   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:41.984237   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:41.984255   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.031024   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:42.031055   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:42.219815   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.256851   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.256888   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:42.719015   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.756668   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.756705   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.219173   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.255610   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:43.255645   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.719116   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.725453   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:47:43.739070   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:47:43.739108   58417 api_server.go:131] duration metric: took 5.020238689s to wait for apiserver health ...
	I0719 15:47:43.739119   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:43.739128   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:43.741458   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:47:40.069048   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting to get IP...
	I0719 15:47:40.069866   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070409   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070480   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.070379   59996 retry.go:31] will retry after 299.168281ms: waiting for machine to come up
	I0719 15:47:40.370939   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371381   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.371340   59996 retry.go:31] will retry after 388.345842ms: waiting for machine to come up
	I0719 15:47:40.761301   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762861   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762889   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.762797   59996 retry.go:31] will retry after 305.39596ms: waiting for machine to come up
	I0719 15:47:41.070215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070791   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070823   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.070746   59996 retry.go:31] will retry after 452.50233ms: waiting for machine to come up
	I0719 15:47:41.525465   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.525997   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.526019   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.525920   59996 retry.go:31] will retry after 686.050268ms: waiting for machine to come up
	I0719 15:47:42.214012   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214513   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214545   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:42.214465   59996 retry.go:31] will retry after 867.815689ms: waiting for machine to come up
	I0719 15:47:43.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084240   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:43.084198   59996 retry.go:31] will retry after 1.006018507s: waiting for machine to come up
	I0719 15:47:44.092571   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093050   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:44.092992   59996 retry.go:31] will retry after 961.604699ms: waiting for machine to come up
	I0719 15:47:43.743125   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:47:43.780558   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:47:43.825123   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:47:43.849564   58417 system_pods.go:59] 8 kube-system pods found
	I0719 15:47:43.849608   58417 system_pods.go:61] "coredns-5cfdc65f69-9p4dr" [b6744bc9-b683-4f7e-b506-a95eb58ac308] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:47:43.849620   58417 system_pods.go:61] "etcd-no-preload-382231" [1f2704ae-84a0-4636-9826-f6bb5d2cb8b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:47:43.849632   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [e4ae90fb-9024-4420-9249-6f936ff43894] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:47:43.849643   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [ceb3538d-a6b9-4135-b044-b139003baf35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:47:43.849650   58417 system_pods.go:61] "kube-proxy-z2z9r" [fdc0eb8f-2884-436b-ba1e-4c71107f756c] Running
	I0719 15:47:43.849657   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [5ae3221b-7186-4dbe-9b1b-fb4c8c239c62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:47:43.849677   58417 system_pods.go:61] "metrics-server-78fcd8795b-zwr8g" [4d4de9aa-89f2-4cf4-85c2-26df25bd82c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:47:43.849687   58417 system_pods.go:61] "storage-provisioner" [ab5ce17f-a0da-4ab7-803e-245ba4363d09] Running
	I0719 15:47:43.849696   58417 system_pods.go:74] duration metric: took 24.54438ms to wait for pod list to return data ...
	I0719 15:47:43.849709   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:47:43.864512   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:47:43.864636   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:47:43.864684   58417 node_conditions.go:105] duration metric: took 14.967708ms to run NodePressure ...
	I0719 15:47:43.864727   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:44.524399   58417 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531924   58417 kubeadm.go:739] kubelet initialised
	I0719 15:47:44.531944   58417 kubeadm.go:740] duration metric: took 7.516197ms waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531952   58417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:47:44.538016   58417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:45.377244   58817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18533335s)
	I0719 15:47:45.377275   58817 crio.go:469] duration metric: took 3.185430213s to extract the tarball
	I0719 15:47:45.377282   58817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:47:45.422160   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:45.463351   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:45.463377   58817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:45.463437   58817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.463445   58817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.463484   58817 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.463496   58817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.463452   58817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.463470   58817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465250   58817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465259   58817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.465270   58817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.465280   58817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.465252   58817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.465254   58817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.465322   58817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.465358   58817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.652138   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:47:45.694548   58817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:47:45.694600   58817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:47:45.694655   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.698969   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:47:45.721986   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.747138   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:47:45.779449   58817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:47:45.779485   58817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.779526   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.783597   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.822950   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.825025   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:47:45.830471   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.835797   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.837995   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.840998   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.907741   58817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:47:45.907793   58817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.907845   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.928805   58817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:47:45.928844   58817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.928918   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.948467   58817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:47:45.948522   58817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.948571   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.966584   58817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:47:45.966629   58817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.966683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975276   58817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:47:45.975316   58817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.975339   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.975355   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975378   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.975424   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.975449   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:46.069073   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:46.069100   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:47:46.079020   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:47:46.080816   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:47:46.080818   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:47:46.111983   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:47:46.308204   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:46.465651   58817 cache_images.go:92] duration metric: took 1.002255395s to LoadCachedImages
	W0719 15:47:46.465740   58817 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0719 15:47:46.465753   58817 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:47:46.465899   58817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:46.465973   58817 ssh_runner.go:195] Run: crio config
	I0719 15:47:46.524125   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:47:46.524152   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:46.524167   58817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:46.524190   58817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:47:46.524322   58817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:46.524476   58817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:47:46.534654   58817 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:46.534726   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:46.544888   58817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:47:46.565864   58817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:47:46.584204   58817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:47:46.603470   58817 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:46.607776   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:46.624713   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:46.752753   58817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:46.776115   58817 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:47:46.776151   58817 certs.go:194] generating shared ca certs ...
	I0719 15:47:46.776182   58817 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:46.776376   58817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:46.776431   58817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:46.776443   58817 certs.go:256] generating profile certs ...
	I0719 15:47:46.776559   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:47:46.776622   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:47:46.776673   58817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:47:46.776811   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:46.776860   58817 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:46.776880   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:46.776922   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:46.776961   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:46.776991   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:46.777051   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:46.777929   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:46.815207   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:46.863189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:46.894161   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:46.932391   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:47:46.981696   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:47:47.016950   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:47.043597   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:47:47.067408   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:47.092082   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:47.116639   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:47.142425   58817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:47.161443   58817 ssh_runner.go:195] Run: openssl version
	I0719 15:47:47.167678   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:47.180194   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185276   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185330   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.191437   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:47.203471   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:47.215645   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220392   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220444   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.226332   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:47.238559   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:47.251382   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256213   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256268   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.262261   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:47.275192   58817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:47.280176   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:47.288308   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:47.295013   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:47.301552   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:47.307628   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:47.313505   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:47:47.319956   58817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:47.320042   58817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:47.320097   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.359706   58817 cri.go:89] found id: ""
	I0719 15:47:47.359789   58817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:47.373816   58817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:47.373839   58817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:47.373907   58817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:47.386334   58817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:47.387432   58817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:47:47.388146   58817 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-3847/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862924" cluster setting kubeconfig missing "old-k8s-version-862924" context setting]
	I0719 15:47:47.389641   58817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:47.393000   58817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:47.404737   58817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.102
	I0719 15:47:47.404770   58817 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:47.404782   58817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:47.404847   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.448460   58817 cri.go:89] found id: ""
	I0719 15:47:47.448529   58817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:47.466897   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:47.479093   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:47.479136   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:47.479201   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:47.490338   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:47.490425   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:47.502079   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:47.514653   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:47.514722   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:47.526533   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.536043   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:47.536109   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.545691   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:47.555221   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:47.555295   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:47.564645   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:47.574094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:47.740041   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:45.055856   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056318   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056347   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:45.056263   59996 retry.go:31] will retry after 1.300059023s: waiting for machine to come up
	I0719 15:47:46.357875   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358379   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358407   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:46.358331   59996 retry.go:31] will retry after 2.269558328s: waiting for machine to come up
	I0719 15:47:48.630965   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631641   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631674   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:48.631546   59996 retry.go:31] will retry after 2.829487546s: waiting for machine to come up
	I0719 15:47:47.449778   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:48.045481   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:48.045508   58417 pod_ready.go:81] duration metric: took 3.507466621s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.045521   58417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.272472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.545776   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.692516   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.799640   58817 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:48.799721   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.299983   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.800470   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.300833   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.800741   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.300351   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.800185   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.463569   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464003   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:51.463968   59996 retry.go:31] will retry after 2.917804786s: waiting for machine to come up
	I0719 15:47:54.383261   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383967   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383993   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:54.383924   59996 retry.go:31] will retry after 4.044917947s: waiting for machine to come up
	I0719 15:47:50.052168   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:51.052114   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:51.052135   58417 pod_ready.go:81] duration metric: took 3.006607122s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:51.052144   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059540   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:52.059563   58417 pod_ready.go:81] duration metric: took 1.007411773s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059576   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.066338   58417 pod_ready.go:102] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:54.567056   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.567076   58417 pod_ready.go:81] duration metric: took 2.507493559s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.567085   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571655   58417 pod_ready.go:92] pod "kube-proxy-z2z9r" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.571672   58417 pod_ready.go:81] duration metric: took 4.581191ms for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571680   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.575983   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.576005   58417 pod_ready.go:81] duration metric: took 4.315788ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.576017   58417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
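(The pod_ready polling above is the programmatic equivalent of waiting for the Ready condition on each kube-system pod. A manual check along the same lines, assuming the kubectl context carries the profile name as the other kubectl invocations in this report do, would be:

	kubectl --context no-preload-382231 -n kube-system wait --for=condition=Ready pod/metrics-server-78fcd8795b-zwr8g --timeout=4m0s

Since the metrics-server pod never reports Ready later in this log, a wait like this would time out.)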
	I0719 15:47:53.300353   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.800804   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.800691   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.800502   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.300314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.300773   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.432420   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432945   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Found IP for machine: 192.168.61.144
	I0719 15:47:58.432976   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has current primary IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432988   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserving static IP address...
	I0719 15:47:58.433361   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.433395   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | skip adding static IP to network mk-default-k8s-diff-port-601445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"}
	I0719 15:47:58.433412   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserved static IP address: 192.168.61.144
	I0719 15:47:58.433430   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for SSH to be available...
	I0719 15:47:58.433442   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Getting to WaitForSSH function...
	I0719 15:47:58.435448   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435770   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.435807   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435868   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH client type: external
	I0719 15:47:58.435930   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa (-rw-------)
	I0719 15:47:58.435973   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:58.435992   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | About to run SSH command:
	I0719 15:47:58.436002   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | exit 0
	I0719 15:47:58.562187   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:58.562564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetConfigRaw
	I0719 15:47:58.563233   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.565694   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566042   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.566066   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566301   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:47:58.566469   59208 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:58.566489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:58.566684   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.569109   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569485   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.569512   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569594   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.569763   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.569912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.570022   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.570167   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.570398   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.570412   59208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:58.675164   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:58.675217   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675455   59208 buildroot.go:166] provisioning hostname "default-k8s-diff-port-601445"
	I0719 15:47:58.675487   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.678103   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678522   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.678564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678721   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.678908   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679074   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679198   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.679345   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.679516   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.679531   59208 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-601445 && echo "default-k8s-diff-port-601445" | sudo tee /etc/hostname
	I0719 15:47:58.802305   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-601445
	
	I0719 15:47:58.802336   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.805215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805582   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.805613   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805796   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.805981   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806139   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806322   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.806517   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.806689   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.806706   59208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-601445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-601445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-601445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:58.919959   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:58.919985   59208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:58.920019   59208 buildroot.go:174] setting up certificates
	I0719 15:47:58.920031   59208 provision.go:84] configureAuth start
	I0719 15:47:58.920041   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.920283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.922837   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923193   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.923225   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923413   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.925832   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926128   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.926156   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926297   59208 provision.go:143] copyHostCerts
	I0719 15:47:58.926360   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:58.926374   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:58.926425   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:58.926512   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:58.926520   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:58.926543   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:58.926600   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:58.926609   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:58.926630   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:58.926682   59208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-601445 san=[127.0.0.1 192.168.61.144 default-k8s-diff-port-601445 localhost minikube]
	I0719 15:47:59.080911   59208 provision.go:177] copyRemoteCerts
	I0719 15:47:59.080966   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:59.080990   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084029   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.084059   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084219   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.084411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.084531   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.084674   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.172754   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:59.198872   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 15:47:59.222898   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
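(The SANs requested for the server certificate above, 127.0.0.1 192.168.61.144 default-k8s-diff-port-601445 localhost minikube, could be confirmed on the guest once the cert is in place with a standard openssl query; this is only an illustrative check, not part of the test flow:

	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
)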
	I0719 15:47:59.246017   59208 provision.go:87] duration metric: took 325.975105ms to configureAuth
	I0719 15:47:59.246037   59208 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:59.246215   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:47:59.246312   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.248757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249079   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.249111   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249354   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.249526   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249679   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249779   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.249924   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.250142   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.250161   59208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:59.743101   58376 start.go:364] duration metric: took 52.710718223s to acquireMachinesLock for "embed-certs-817144"
	I0719 15:47:59.743169   58376 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:59.743177   58376 fix.go:54] fixHost starting: 
	I0719 15:47:59.743553   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:59.743591   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:59.760837   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0719 15:47:59.761216   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:59.761734   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:47:59.761754   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:59.762080   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:59.762291   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:47:59.762504   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:47:59.764044   58376 fix.go:112] recreateIfNeeded on embed-certs-817144: state=Stopped err=<nil>
	I0719 15:47:59.764067   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	W0719 15:47:59.764217   58376 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:59.766063   58376 out.go:177] * Restarting existing kvm2 VM for "embed-certs-817144" ...
	I0719 15:47:56.582753   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:58.583049   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:59.508289   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:59.508327   59208 machine.go:97] duration metric: took 941.842272ms to provisionDockerMachine
	I0719 15:47:59.508343   59208 start.go:293] postStartSetup for "default-k8s-diff-port-601445" (driver="kvm2")
	I0719 15:47:59.508359   59208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:59.508383   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.508687   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:59.508720   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.511449   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.511887   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.511911   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.512095   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.512275   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.512437   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.512580   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.596683   59208 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:59.600761   59208 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:59.600782   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:59.600841   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:59.600911   59208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:59.600996   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:59.609867   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:59.633767   59208 start.go:296] duration metric: took 125.408568ms for postStartSetup
	I0719 15:47:59.633803   59208 fix.go:56] duration metric: took 20.874627736s for fixHost
	I0719 15:47:59.633825   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.636600   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.636944   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.636977   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.637121   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.637328   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637495   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637640   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.637811   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.637989   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.637999   59208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:59.742929   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404079.728807147
	
	I0719 15:47:59.742957   59208 fix.go:216] guest clock: 1721404079.728807147
	I0719 15:47:59.742967   59208 fix.go:229] Guest: 2024-07-19 15:47:59.728807147 +0000 UTC Remote: 2024-07-19 15:47:59.633807395 +0000 UTC m=+200.280673126 (delta=94.999752ms)
	I0719 15:47:59.743008   59208 fix.go:200] guest clock delta is within tolerance: 94.999752ms
	I0719 15:47:59.743013   59208 start.go:83] releasing machines lock for "default-k8s-diff-port-601445", held for 20.983876369s
	I0719 15:47:59.743040   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.743262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:59.746145   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746501   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.746534   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746662   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747297   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747461   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747553   59208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:59.747603   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.747714   59208 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:59.747738   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.750268   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750583   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750751   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750916   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750932   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.750942   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.751127   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751170   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.751269   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751353   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751421   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.751489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751646   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.834888   59208 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:59.859285   59208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:00.009771   59208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:00.015906   59208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:00.015973   59208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:00.032129   59208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:00.032150   59208 start.go:495] detecting cgroup driver to use...
	I0719 15:48:00.032215   59208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:00.050052   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:00.063282   59208 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:00.063341   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:00.078073   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:00.092872   59208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:00.217105   59208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:00.364335   59208 docker.go:233] disabling docker service ...
	I0719 15:48:00.364403   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:00.384138   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:00.400280   59208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:00.543779   59208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:00.671512   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:00.687337   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:00.708629   59208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:00.708690   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.720508   59208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:00.720580   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.732952   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.743984   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.756129   59208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:00.766873   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.777481   59208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.799865   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.812450   59208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:00.822900   59208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:00.822964   59208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:00.836117   59208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:00.845958   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:00.959002   59208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:01.104519   59208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:01.104598   59208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:01.110652   59208 start.go:563] Will wait 60s for crictl version
	I0719 15:48:01.110711   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:48:01.114358   59208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:01.156969   59208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:01.157063   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.187963   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.219925   59208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:47:58.299763   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.800069   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.299998   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.800005   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.300717   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.800601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.300433   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.800788   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.300324   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.221101   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:48:01.224369   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:01.224789   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224989   59208 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:01.229813   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:01.243714   59208 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:01.243843   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:01.243886   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:01.283013   59208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:01.283093   59208 ssh_runner.go:195] Run: which lz4
	I0719 15:48:01.287587   59208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:48:01.291937   59208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:01.291965   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:02.810751   59208 crio.go:462] duration metric: took 1.52319928s to copy over tarball
	I0719 15:48:02.810846   59208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:59.767270   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Start
	I0719 15:47:59.767433   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring networks are active...
	I0719 15:47:59.768056   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network default is active
	I0719 15:47:59.768371   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network mk-embed-certs-817144 is active
	I0719 15:47:59.768804   58376 main.go:141] libmachine: (embed-certs-817144) Getting domain xml...
	I0719 15:47:59.769396   58376 main.go:141] libmachine: (embed-certs-817144) Creating domain...
	I0719 15:48:01.024457   58376 main.go:141] libmachine: (embed-certs-817144) Waiting to get IP...
	I0719 15:48:01.025252   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.025697   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.025741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.025660   60153 retry.go:31] will retry after 211.260956ms: waiting for machine to come up
	I0719 15:48:01.238027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.238561   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.238588   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.238529   60153 retry.go:31] will retry after 346.855203ms: waiting for machine to come up
	I0719 15:48:01.587201   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.587773   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.587815   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.587736   60153 retry.go:31] will retry after 327.69901ms: waiting for machine to come up
	I0719 15:48:01.917433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.917899   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.917931   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.917864   60153 retry.go:31] will retry after 474.430535ms: waiting for machine to come up
	I0719 15:48:02.393610   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.394139   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.394168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.394061   60153 retry.go:31] will retry after 491.247455ms: waiting for machine to come up
	I0719 15:48:02.886826   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.887296   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.887329   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.887249   60153 retry.go:31] will retry after 661.619586ms: waiting for machine to come up
	I0719 15:48:03.550633   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:03.551175   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:03.551199   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:03.551126   60153 retry.go:31] will retry after 1.10096194s: waiting for machine to come up
	I0719 15:48:00.583866   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:02.585144   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:03.300240   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.799829   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.299793   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.800609   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.300595   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.799844   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.800150   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.299923   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.800063   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.112520   59208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301644218s)
	I0719 15:48:05.112555   59208 crio.go:469] duration metric: took 2.301774418s to extract the tarball
	I0719 15:48:05.112565   59208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:05.151199   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:05.193673   59208 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:05.193701   59208 cache_images.go:84] Images are preloaded, skipping loading
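The lines above show the preload path on restart: `crictl images --output json` is checked for the expected `kube-apiserver` image, and when it is missing the cached lz4 tarball is copied to the VM, unpacked under `/var`, and removed, after which the image check passes. A minimal Go sketch of that check-then-extract flow using plain `exec` (the helper names and hard-coded tarball path are illustrative, not minikube's actual API):

```go
// Sketch of the preload check-then-extract flow from the log above.
// Assumes crictl and lz4 are on PATH; the tarball path is illustrative.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// crictlImages models the part of `crictl images --output json` we need.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(raw []byte, want string) bool {
	var imgs crictlImages
	if err := json.Unmarshal(raw, &imgs); err != nil {
		return false
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true
			}
		}
	}
	return false
}

func preloadIfMissing(tarball string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return err
	}
	if hasImage(out, "registry.k8s.io/kube-apiserver") {
		return nil // already preloaded, nothing to do
	}
	// Unpack the lz4 preload tarball into /var, mirroring the tar flags in the log,
	// then remove it, as the log does after extraction.
	if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
		"security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
		return err
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := preloadIfMissing("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```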
	I0719 15:48:05.193712   59208 kubeadm.go:934] updating node { 192.168.61.144 8444 v1.30.3 crio true true} ...
	I0719 15:48:05.193836   59208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-601445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:05.193919   59208 ssh_runner.go:195] Run: crio config
	I0719 15:48:05.239103   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:05.239131   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:05.239146   59208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:05.239176   59208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-601445 NodeName:default-k8s-diff-port-601445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:05.239374   59208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-601445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:05.239441   59208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:05.249729   59208 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:05.249799   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:05.259540   59208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 15:48:05.277388   59208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:05.294497   59208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 15:48:05.313990   59208 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:05.318959   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
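The bash one-liner above is the idiom used to pin `control-plane.minikube.internal` in `/etc/hosts`: strip any existing line tagged with that hostname, append the current `IP<TAB>hostname` mapping, and copy the result back over `/etc/hosts` with sudo. A simplified pure-Go equivalent, assuming the process may write the file directly (the real run goes through sudo on the guest):

```go
// ensureHostsEntry sketches the /etc/hosts rewrite shown above: drop any
// stale line for the hostname, append the fresh mapping, write the file back.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line that does not already map our hostname.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.144", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```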
	I0719 15:48:05.332278   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:05.463771   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:05.480474   59208 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445 for IP: 192.168.61.144
	I0719 15:48:05.480499   59208 certs.go:194] generating shared ca certs ...
	I0719 15:48:05.480520   59208 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:05.480674   59208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:05.480732   59208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:05.480746   59208 certs.go:256] generating profile certs ...
	I0719 15:48:05.480859   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.key
	I0719 15:48:05.480937   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key.e31ea710
	I0719 15:48:05.480992   59208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key
	I0719 15:48:05.481128   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:05.481165   59208 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:05.481180   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:05.481210   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:05.481245   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:05.481276   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:05.481334   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:05.481940   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:05.524604   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:05.562766   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:05.618041   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:05.660224   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 15:48:05.689232   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:05.713890   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:05.738923   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:05.764447   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:05.793905   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:05.823630   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:05.849454   59208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:05.868309   59208 ssh_runner.go:195] Run: openssl version
	I0719 15:48:05.874423   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:05.887310   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.891994   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.892057   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.898173   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:05.911541   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:05.922829   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927537   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927600   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.933642   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:05.946269   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:05.958798   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963899   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963959   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.969801   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
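The repeated `openssl x509 -hash` / `ln -fs .../<hash>.0` pairs above populate the VM's system trust store: OpenSSL prints the certificate's subject hash, and a symlink named `<hash>.0` in `/etc/ssl/certs` makes the CA discoverable by hash lookup. A hedged sketch of that pairing, shelling out to openssl the same way the log does (helper names are made up for illustration):

```go
// linkCertByHash sketches the subject-hash symlink convention used above:
// /etc/ssl/certs/<subject-hash>.0 -> the CA certificate file.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs` by replacing any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```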
	I0719 15:48:05.980966   59208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:05.985487   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:05.991303   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:05.997143   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:06.003222   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:06.008984   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:06.014939   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
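Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides whether the existing control-plane certificates can be reused. The same check in pure Go, as a sketch:

```go
// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```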
	I0719 15:48:06.020976   59208 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:06.021059   59208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:06.021106   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.066439   59208 cri.go:89] found id: ""
	I0719 15:48:06.066503   59208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:06.080640   59208 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:06.080663   59208 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:06.080730   59208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:06.093477   59208 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:06.094740   59208 kubeconfig.go:125] found "default-k8s-diff-port-601445" server: "https://192.168.61.144:8444"
	I0719 15:48:06.096907   59208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:06.107974   59208 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.144
	I0719 15:48:06.108021   59208 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:06.108035   59208 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:06.108109   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.156149   59208 cri.go:89] found id: ""
	I0719 15:48:06.156222   59208 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:06.172431   59208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:06.182482   59208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:06.182511   59208 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:06.182562   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 15:48:06.192288   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:06.192361   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:06.202613   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 15:48:06.212553   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:06.212624   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:06.223086   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.233949   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:06.234007   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.247224   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 15:48:06.257851   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:06.257908   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:06.268650   59208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:06.279549   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:06.421964   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.407768   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.614213   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.686560   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.769476   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:07.769590   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.270472   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.770366   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.795057   59208 api_server.go:72] duration metric: took 1.025580277s to wait for apiserver process to appear ...
	I0719 15:48:08.795086   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:08.795112   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:08.795617   59208 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0719 15:48:09.295459   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:04.653309   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:04.653784   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:04.653846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:04.653753   60153 retry.go:31] will retry after 1.276153596s: waiting for machine to come up
	I0719 15:48:05.931365   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:05.931820   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:05.931848   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:05.931798   60153 retry.go:31] will retry after 1.372328403s: waiting for machine to come up
	I0719 15:48:07.305390   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:07.305892   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:07.305922   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:07.305850   60153 retry.go:31] will retry after 1.738311105s: waiting for machine to come up
	I0719 15:48:09.046095   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:09.046526   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:09.046558   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:09.046481   60153 retry.go:31] will retry after 2.169449629s: waiting for machine to come up
	I0719 15:48:05.084157   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:07.583246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:09.584584   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:11.457584   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.457651   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.457670   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.490130   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.490165   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.795439   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.803724   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:11.803757   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.295287   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.300002   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:12.300034   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.795285   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.800067   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:48:12.808020   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:12.808045   59208 api_server.go:131] duration metric: took 4.012952016s to wait for apiserver health ...
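The sequence above is the usual apiserver warm-up during a restart: `/healthz` first refuses connections, then returns 403 for the anonymous probe, then 500 while the `rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes` post-start hooks finish, and finally 200. A minimal polling loop in Go; TLS verification is skipped here purely to keep the sketch short, whereas the real client trusts the cluster CA and authenticates:

```go
// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, treating refused connections and 403/500 responses
// as "not ready yet", as in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: a real caller verifies the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.144:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```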
	I0719 15:48:12.808055   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:12.808064   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:12.810134   59208 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:08.300278   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.799805   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.299882   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.800690   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.300543   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.300260   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.800160   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.812011   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:12.824520   59208 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:12.846711   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:12.855286   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:12.855315   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:12.855322   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:12.855329   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:12.855335   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:12.855345   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:12.855353   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:12.855360   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:12.855369   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:12.855377   59208 system_pods.go:74] duration metric: took 8.645314ms to wait for pod list to return data ...
	I0719 15:48:12.855390   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:12.858531   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:12.858556   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:12.858566   59208 node_conditions.go:105] duration metric: took 3.171526ms to run NodePressure ...
	I0719 15:48:12.858581   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:13.176014   59208 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180575   59208 kubeadm.go:739] kubelet initialised
	I0719 15:48:13.180602   59208 kubeadm.go:740] duration metric: took 4.561708ms waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180612   59208 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:13.187723   59208 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.204023   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204052   59208 pod_ready.go:81] duration metric: took 16.303152ms for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.204061   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204070   59208 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.212768   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212790   59208 pod_ready.go:81] duration metric: took 8.709912ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.212800   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212812   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.220452   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220474   59208 pod_ready.go:81] duration metric: took 7.650656ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.220482   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220489   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.251973   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.251997   59208 pod_ready.go:81] duration metric: took 31.499608ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.252008   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.252029   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.650914   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650940   59208 pod_ready.go:81] duration metric: took 398.904724ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.650948   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650954   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.050582   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050615   59208 pod_ready.go:81] duration metric: took 399.652069ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.050630   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050642   59208 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.450349   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450379   59208 pod_ready.go:81] duration metric: took 399.72875ms for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.450391   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450399   59208 pod_ready.go:38] duration metric: took 1.269776818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
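The `pod_ready` lines above implement the extra wait: each system-critical pod is polled for its `Ready` condition, but the per-pod wait is cut short (with the `WaitExtra` errors shown) while the node itself still reports `Ready: False`. A client-go sketch of the per-pod check, assuming the profile's kubeconfig path; this is an illustration, not minikube's own helper:

```go
// podReady reports whether a pod has the Ready condition set to True,
// which is what the pod_ready waits above are polling for.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is taken from the log above; treat it as an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(2 * time.Second) {
		if ok, err := podReady(client, "kube-system", "coredns-7db6d8ff4d-z7865"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```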
	I0719 15:48:14.450416   59208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:14.462296   59208 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:14.462318   59208 kubeadm.go:597] duration metric: took 8.38163922s to restartPrimaryControlPlane
	I0719 15:48:14.462329   59208 kubeadm.go:394] duration metric: took 8.441360513s to StartCluster
	I0719 15:48:14.462348   59208 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.462422   59208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:14.464082   59208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.464400   59208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:14.464459   59208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:14.464531   59208 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464570   59208 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.464581   59208 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:14.464592   59208 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464610   59208 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464636   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:14.464670   59208 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:14.464672   59208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-601445"
	W0719 15:48:14.464684   59208 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:14.464613   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.464740   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.465050   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465111   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465151   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465178   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465235   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.466230   59208 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:11.217150   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:11.217605   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:11.217634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:11.217561   60153 retry.go:31] will retry after 3.406637692s: waiting for machine to come up
	I0719 15:48:14.467899   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:14.481294   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0719 15:48:14.481538   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0719 15:48:14.481541   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0719 15:48:14.481658   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.482122   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482145   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482363   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482387   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482461   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482478   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482590   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482704   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482762   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482853   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.483131   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483159   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.483199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483217   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.486437   59208 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.486462   59208 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:14.486492   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.486893   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.486932   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.498388   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0719 15:48:14.498897   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0719 15:48:14.498952   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499251   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499660   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499678   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.499838   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499853   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.500068   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500168   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500232   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.500410   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.501505   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0719 15:48:14.501876   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.502391   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.502413   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.502456   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.502745   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.503006   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.503314   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.503341   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.505162   59208 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:14.505166   59208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:12.084791   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.582986   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.506465   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:14.506487   59208 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:14.506506   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.506585   59208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.506604   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:14.506628   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.510227   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511092   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511134   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511207   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511231   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511257   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511370   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511390   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511570   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511574   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511662   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.511713   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511787   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511840   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.520612   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0719 15:48:14.521013   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.521451   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.521470   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.521817   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.522016   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.523622   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.523862   59208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.523876   59208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:14.523895   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.526426   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.526882   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.526941   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.527060   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.527190   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.527344   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.527439   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.674585   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:14.693700   59208 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:14.752990   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.856330   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:14.856350   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:14.884762   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:14.884784   59208 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:14.895548   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.915815   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:14.915844   59208 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:14.979442   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:15.098490   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098517   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098869   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.098893   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.098902   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.099141   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.099158   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.105078   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.105252   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.105506   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.105526   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.802868   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.802892   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803265   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803279   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.803285   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.803517   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803530   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803577   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.905945   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.905972   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906244   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906266   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906266   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.906275   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.906283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906484   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906496   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906511   59208 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:15.908671   59208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 15:48:13.299986   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.800036   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.300736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.799875   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.300297   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.800535   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.299951   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.800667   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.300251   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.800590   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.910057   59208 addons.go:510] duration metric: took 1.445597408s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
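The sequence above enables the default-storageclass, storage-provisioner and metrics-server addons by scp-ing their manifests onto the machine and applying them with the bundled kubectl over SSH. For reference, a minimal way to inspect the same addon state by hand against this profile (profile name and namespace taken from the log; the commands are a sketch, not part of the test run):

	minikube -p default-k8s-diff-port-601445 addons list
	kubectl --context default-k8s-diff-port-601445 -n kube-system get pods | grep -E 'metrics-server|storage-provisioner'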
	I0719 15:48:16.697266   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:18.698379   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.627319   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:14.627800   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:14.627822   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:14.627767   60153 retry.go:31] will retry after 4.38444645s: waiting for machine to come up
	I0719 15:48:19.016073   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016711   58376 main.go:141] libmachine: (embed-certs-817144) Found IP for machine: 192.168.72.37
	I0719 15:48:19.016742   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has current primary IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016749   58376 main.go:141] libmachine: (embed-certs-817144) Reserving static IP address...
	I0719 15:48:19.017180   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.017204   58376 main.go:141] libmachine: (embed-certs-817144) Reserved static IP address: 192.168.72.37
	I0719 15:48:19.017222   58376 main.go:141] libmachine: (embed-certs-817144) DBG | skip adding static IP to network mk-embed-certs-817144 - found existing host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"}
	I0719 15:48:19.017239   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Getting to WaitForSSH function...
	I0719 15:48:19.017254   58376 main.go:141] libmachine: (embed-certs-817144) Waiting for SSH to be available...
	I0719 15:48:19.019511   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.019867   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.019896   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.020064   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH client type: external
	I0719 15:48:19.020080   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa (-rw-------)
	I0719 15:48:19.020107   58376 main.go:141] libmachine: (embed-certs-817144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:48:19.020115   58376 main.go:141] libmachine: (embed-certs-817144) DBG | About to run SSH command:
	I0719 15:48:19.020124   58376 main.go:141] libmachine: (embed-certs-817144) DBG | exit 0
	I0719 15:48:19.150328   58376 main.go:141] libmachine: (embed-certs-817144) DBG | SSH cmd err, output: <nil>: 
	I0719 15:48:19.150676   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetConfigRaw
	I0719 15:48:19.151317   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.154087   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154600   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.154634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154907   58376 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/config.json ...
	I0719 15:48:19.155143   58376 machine.go:94] provisionDockerMachine start ...
	I0719 15:48:19.155168   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:19.155369   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.157741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.158060   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158175   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.158368   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158618   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158769   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.158945   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.159144   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.159161   58376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:48:19.274836   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:48:19.274863   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275148   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:48:19.275174   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275373   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.278103   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278489   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.278518   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.278892   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279111   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279299   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.279577   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.279798   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.279815   58376 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-817144 && echo "embed-certs-817144" | sudo tee /etc/hostname
	I0719 15:48:19.413956   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-817144
	
	I0719 15:48:19.413988   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.416836   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.417196   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417408   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.417599   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417777   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417911   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.418083   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.418274   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.418290   58376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-817144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-817144/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-817144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:48:16.583538   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.083431   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.541400   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
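The hostname provisioning above runs two SSH commands: one to set and persist the hostname, and one to make sure it resolves via /etc/hosts. A simplified standalone sketch of what the log shows, with the hostname taken from this profile (the real provisioner additionally rewrites an existing 127.0.1.1 entry instead of always appending):

	sudo hostname embed-certs-817144 && echo "embed-certs-817144" | sudo tee /etc/hostname
	grep -xq '.*\sembed-certs-817144' /etc/hosts || echo '127.0.1.1 embed-certs-817144' | sudo tee -a /etc/hosts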
	I0719 15:48:19.541439   58376 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:48:19.541464   58376 buildroot.go:174] setting up certificates
	I0719 15:48:19.541478   58376 provision.go:84] configureAuth start
	I0719 15:48:19.541495   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.541801   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.544209   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544579   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.544608   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544766   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.547206   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.547570   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547714   58376 provision.go:143] copyHostCerts
	I0719 15:48:19.547772   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:48:19.547782   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:48:19.547827   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:48:19.547939   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:48:19.547949   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:48:19.547969   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:48:19.548024   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:48:19.548031   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:48:19.548047   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:48:19.548093   58376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.embed-certs-817144 san=[127.0.0.1 192.168.72.37 embed-certs-817144 localhost minikube]
	I0719 15:48:20.024082   58376 provision.go:177] copyRemoteCerts
	I0719 15:48:20.024137   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:48:20.024157   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.026940   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027322   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.027358   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027541   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.027819   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.028011   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.028165   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.117563   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:48:20.144428   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:48:20.171520   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:48:20.195188   58376 provision.go:87] duration metric: took 653.6924ms to configureAuth
	I0719 15:48:20.195215   58376 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:48:20.195432   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:20.195518   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.198648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.198970   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.199007   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.199126   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.199335   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199527   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199687   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.199849   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.200046   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.200063   58376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:48:20.502753   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:48:20.502782   58376 machine.go:97] duration metric: took 1.347623735s to provisionDockerMachine
	I0719 15:48:20.502794   58376 start.go:293] postStartSetup for "embed-certs-817144" (driver="kvm2")
	I0719 15:48:20.502805   58376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:48:20.502821   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.503204   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:48:20.503248   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.506142   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.506563   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506697   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.506938   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.507125   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.507258   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.593356   58376 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:48:20.597843   58376 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:48:20.597877   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:48:20.597948   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:48:20.598048   58376 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:48:20.598164   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:48:20.607951   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:20.634860   58376 start.go:296] duration metric: took 132.043928ms for postStartSetup
	I0719 15:48:20.634900   58376 fix.go:56] duration metric: took 20.891722874s for fixHost
	I0719 15:48:20.634919   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.637846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638181   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.638218   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638439   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.638674   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.638884   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.639054   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.639256   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.639432   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.639444   58376 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:48:20.755076   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404100.730818472
	
	I0719 15:48:20.755107   58376 fix.go:216] guest clock: 1721404100.730818472
	I0719 15:48:20.755115   58376 fix.go:229] Guest: 2024-07-19 15:48:20.730818472 +0000 UTC Remote: 2024-07-19 15:48:20.634903926 +0000 UTC m=+356.193225446 (delta=95.914546ms)
	I0719 15:48:20.755134   58376 fix.go:200] guest clock delta is within tolerance: 95.914546ms
	I0719 15:48:20.755139   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 21.011996674s
	I0719 15:48:20.755171   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.755465   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:20.758255   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758621   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.758644   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758861   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759348   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759545   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759656   58376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:48:20.759720   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.759780   58376 ssh_runner.go:195] Run: cat /version.json
	I0719 15:48:20.759802   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.762704   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.762833   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763161   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763202   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763399   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763493   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763545   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763608   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763693   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763772   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764001   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763996   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.764156   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764278   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.867430   58376 ssh_runner.go:195] Run: systemctl --version
	I0719 15:48:20.873463   58376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:21.029369   58376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:21.035953   58376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:21.036028   58376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:21.054352   58376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:21.054381   58376 start.go:495] detecting cgroup driver to use...
	I0719 15:48:21.054440   58376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:21.071903   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:21.088624   58376 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:21.088688   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:21.104322   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:21.120089   58376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:21.242310   58376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:21.422514   58376 docker.go:233] disabling docker service ...
	I0719 15:48:21.422589   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:21.439213   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:21.454361   58376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:21.577118   58376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:21.704150   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:21.719160   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:21.738765   58376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:21.738817   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.750720   58376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:21.750798   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.763190   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.775630   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.787727   58376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:21.799520   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.812016   58376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.830564   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.841770   58376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:21.851579   58376 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:21.851651   58376 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:21.864529   58376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:21.874301   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:21.994669   58376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:22.131448   58376 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:22.131521   58376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:22.137328   58376 start.go:563] Will wait 60s for crictl version
	I0719 15:48:22.137391   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:48:22.141409   58376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:22.182947   58376 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:22.183029   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.217804   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.252450   58376 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
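The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the cgroupfs cgroup manager and the registry.k8s.io/pause:3.9 pause image, load br_netfilter, enable IP forwarding and restart the runtime before the version checks. Condensed from the logged ssh_runner calls, a hedged recap of those same steps for anyone reproducing the CRI-O setup outside the test harness (paths and values copied from the log):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter && sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version   # the test then expects RuntimeName cri-o, RuntimeVersion 1.29.1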
	I0719 15:48:18.300557   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.800420   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.300696   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.799874   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.300803   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.800634   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.300760   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.799929   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.300267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.800463   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.197350   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:22.197536   59208 node_ready.go:49] node "default-k8s-diff-port-601445" has status "Ready":"True"
	I0719 15:48:22.197558   59208 node_ready.go:38] duration metric: took 7.503825721s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:22.197568   59208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:22.203380   59208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:24.211899   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:22.253862   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:22.256397   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256763   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:22.256791   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256968   58376 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:22.261184   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:22.274804   58376 kubeadm.go:883] updating cluster {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:22.274936   58376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:22.274994   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:22.317501   58376 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:22.317559   58376 ssh_runner.go:195] Run: which lz4
	I0719 15:48:22.321646   58376 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:48:22.326455   58376 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:22.326478   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:23.820083   58376 crio.go:462] duration metric: took 1.498469232s to copy over tarball
	I0719 15:48:23.820155   58376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:48:21.583230   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.585191   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.300116   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.800737   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.300641   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.800158   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.300678   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.800635   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.299778   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.799791   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.299845   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.710838   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.786269   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:26.105248   58376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.285062307s)
	I0719 15:48:26.105271   58376 crio.go:469] duration metric: took 2.285164513s to extract the tarball
	I0719 15:48:26.105279   58376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:26.142811   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:26.185631   58376 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:26.185660   58376 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:26.185668   58376 kubeadm.go:934] updating node { 192.168.72.37 8443 v1.30.3 crio true true} ...
	I0719 15:48:26.185784   58376 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-817144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:26.185857   58376 ssh_runner.go:195] Run: crio config
	I0719 15:48:26.238150   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:26.238172   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:26.238183   58376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:26.238211   58376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.37 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-817144 NodeName:embed-certs-817144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:26.238449   58376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-817144"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:26.238515   58376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:26.249200   58376 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:26.249278   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:26.258710   58376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 15:48:26.279235   58376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:26.299469   58376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0719 15:48:26.317789   58376 ssh_runner.go:195] Run: grep 192.168.72.37	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:26.321564   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:26.333153   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:26.452270   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:26.469344   58376 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144 for IP: 192.168.72.37
	I0719 15:48:26.469366   58376 certs.go:194] generating shared ca certs ...
	I0719 15:48:26.469382   58376 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:26.469530   58376 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:26.469586   58376 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:26.469601   58376 certs.go:256] generating profile certs ...
	I0719 15:48:26.469694   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/client.key
	I0719 15:48:26.469791   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key.928d4c24
	I0719 15:48:26.469846   58376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key
	I0719 15:48:26.469982   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:26.470021   58376 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:26.470035   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:26.470071   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:26.470105   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:26.470140   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:26.470197   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:26.470812   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:26.508455   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:26.537333   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:26.565167   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:26.601152   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 15:48:26.636408   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:26.669076   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:26.695438   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:26.718897   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:26.741760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:26.764760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:26.787772   58376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:26.807332   58376 ssh_runner.go:195] Run: openssl version
	I0719 15:48:26.815182   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:26.827373   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831926   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831973   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.837923   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:48:26.849158   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:26.860466   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865178   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865249   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.870873   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:26.882044   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:26.893283   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897750   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897809   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.903395   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:26.914389   58376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:26.918904   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:26.924659   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:26.930521   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:26.936808   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:26.942548   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:26.948139   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
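The openssl x509 -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. A minimal standard-library Go sketch of the same check is shown below; the certificate path is taken from this log for illustration only, and the program is not the helper minikube itself uses.

// certcheck.go: report whether a PEM-encoded certificate expires within 24h,
// mirroring `openssl x509 -noout -checkend 86400`. Sketch only; path is illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Fail (like -checkend) if the certificate expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}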
	I0719 15:48:26.954557   58376 kubeadm.go:392] StartCluster: {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:26.954644   58376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:26.954722   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:26.994129   58376 cri.go:89] found id: ""
	I0719 15:48:26.994205   58376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:27.006601   58376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:27.006624   58376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:27.006699   58376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:27.017166   58376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:27.018580   58376 kubeconfig.go:125] found "embed-certs-817144" server: "https://192.168.72.37:8443"
	I0719 15:48:27.021622   58376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:27.033000   58376 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.37
	I0719 15:48:27.033033   58376 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:27.033044   58376 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:27.033083   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:27.073611   58376 cri.go:89] found id: ""
	I0719 15:48:27.073678   58376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:27.092986   58376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:27.103557   58376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:27.103580   58376 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:27.103636   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:48:27.113687   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:27.113752   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:27.123696   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:48:27.132928   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:27.132984   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:27.142566   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.152286   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:27.152335   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.161701   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:48:27.171532   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:27.171591   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:27.181229   58376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:27.192232   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:27.330656   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.287561   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.513476   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.616308   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.704518   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:28.704605   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.205265   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.082992   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.746255   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.300034   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.800118   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.300099   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.800538   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.300194   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.800056   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.300473   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.300181   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.800267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.704706   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.204728   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.221741   58376 api_server.go:72] duration metric: took 1.517220815s to wait for apiserver process to appear ...
	I0719 15:48:30.221766   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:30.221786   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.665104   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.665138   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.665152   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.703238   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.703271   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.722495   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.748303   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:32.748344   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.222861   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.227076   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.227104   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.722705   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.734658   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.734683   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:34.222279   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:34.227870   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:48:34.233621   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:34.233646   58376 api_server.go:131] duration metric: took 4.011873202s to wait for apiserver health ...
	I0719 15:48:34.233656   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:34.233664   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:34.235220   58376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
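The healthz probes above follow the usual restart sequence: 403 while anonymous access is still forbidden, 500 while post-start hooks are completing, then 200 once the apiserver is healthy. The sketch below shows a comparable poll loop using only the Go standard library; the URL, timeout, and the insecure TLS setting are illustrative assumptions, since minikube's own client authenticates with the cluster CA rather than skipping verification.

// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200 OK.
// Illustrative sketch only; not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// Illustration only: a real client should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // 200 OK: apiserver is healthy
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.37:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}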
	I0719 15:48:30.210533   59208 pod_ready.go:92] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.210557   59208 pod_ready.go:81] duration metric: took 8.007151724s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.210568   59208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215669   59208 pod_ready.go:92] pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.215692   59208 pod_ready.go:81] duration metric: took 5.116005ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215702   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222633   59208 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.222655   59208 pod_ready.go:81] duration metric: took 6.947228ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222664   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227631   59208 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.227656   59208 pod_ready.go:81] duration metric: took 4.985227ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227667   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405047   59208 pod_ready.go:92] pod "kube-proxy-r7b2z" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.405073   59208 pod_ready.go:81] duration metric: took 177.397954ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405085   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805843   59208 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.805877   59208 pod_ready.go:81] duration metric: took 400.783803ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805890   59208 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:32.821231   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
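The pod_ready.go entries above repeatedly query each system-critical pod and block until its Ready condition becomes True. A rough client-go equivalent is sketched below; the kubeconfig path is a placeholder, the pod name is copied from this log purely as an example, and this is not the helper minikube uses.

// podready.go: wait for a pod's Ready condition to turn True, similar in spirit
// to the pod_ready.go waits logged above. Sketch only; names and paths are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		// Pod name below is only an example taken from the log above.
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-z7865", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}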
	I0719 15:48:34.236303   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:34.248133   58376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:34.270683   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:34.279907   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:34.279939   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:34.279946   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:34.279953   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:34.279960   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:34.279966   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:34.279972   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:34.279977   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:34.279982   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:34.279988   58376 system_pods.go:74] duration metric: took 9.282886ms to wait for pod list to return data ...
	I0719 15:48:34.279995   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:34.283597   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:34.283623   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:34.283634   58376 node_conditions.go:105] duration metric: took 3.634999ms to run NodePressure ...
	I0719 15:48:34.283649   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:31.082803   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:33.583510   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.586116   58376 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590095   58376 kubeadm.go:739] kubelet initialised
	I0719 15:48:34.590119   58376 kubeadm.go:740] duration metric: took 3.977479ms waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590128   58376 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:34.594987   58376 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.600192   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600212   58376 pod_ready.go:81] duration metric: took 5.205124ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.600220   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600225   58376 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.603934   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603952   58376 pod_ready.go:81] duration metric: took 3.719853ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.603959   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603965   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.607778   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607803   58376 pod_ready.go:81] duration metric: took 3.830174ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.607817   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607826   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.673753   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673775   58376 pod_ready.go:81] duration metric: took 65.937586ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.673783   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673788   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.075506   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075539   58376 pod_ready.go:81] duration metric: took 401.743578ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.075548   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075554   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.474518   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474546   58376 pod_ready.go:81] duration metric: took 398.985628ms for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.474558   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474567   58376 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.874540   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874567   58376 pod_ready.go:81] duration metric: took 399.989978ms for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.874576   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874582   58376 pod_ready.go:38] duration metric: took 1.284443879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:35.874646   58376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:35.886727   58376 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:35.886751   58376 kubeadm.go:597] duration metric: took 8.880120513s to restartPrimaryControlPlane
	I0719 15:48:35.886760   58376 kubeadm.go:394] duration metric: took 8.932210528s to StartCluster
	I0719 15:48:35.886781   58376 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.886859   58376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:35.888389   58376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.888642   58376 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:35.888722   58376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:35.888781   58376 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-817144"
	I0719 15:48:35.888810   58376 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-817144"
	I0719 15:48:35.888824   58376 addons.go:69] Setting default-storageclass=true in profile "embed-certs-817144"
	I0719 15:48:35.888839   58376 addons.go:69] Setting metrics-server=true in profile "embed-certs-817144"
	I0719 15:48:35.888875   58376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-817144"
	I0719 15:48:35.888888   58376 addons.go:234] Setting addon metrics-server=true in "embed-certs-817144"
	W0719 15:48:35.888897   58376 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:35.888931   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.888840   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0719 15:48:35.888843   58376 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:35.889000   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.889231   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889242   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889247   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889270   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889272   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889282   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.890641   58376 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:35.892144   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:35.905134   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0719 15:48:35.905572   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.905788   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0719 15:48:35.906107   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906132   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.906171   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.906496   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.906825   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906846   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.907126   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.907179   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.907215   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.907289   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.908269   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0719 15:48:35.908747   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.909343   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.909367   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.909787   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.910337   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910382   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.910615   58376 addons.go:234] Setting addon default-storageclass=true in "embed-certs-817144"
	W0719 15:48:35.910632   58376 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:35.910662   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.910937   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910965   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.926165   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 15:48:35.926905   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.926944   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0719 15:48:35.927369   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.927573   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927636   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927829   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927847   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927959   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928512   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.928551   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.928759   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928824   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0719 15:48:35.928964   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.929176   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.929546   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.929557   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.929927   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.930278   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.931161   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.931773   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.933234   58376 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:35.933298   58376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:35.934543   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:35.934556   58376 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:35.934569   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.934629   58376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:35.934642   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:35.934657   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.938300   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938628   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.938648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938679   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939150   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939340   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.939433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.939479   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939536   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.939619   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939673   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.939937   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.940081   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.940190   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.947955   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0719 15:48:35.948206   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.948643   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.948654   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.948961   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.949119   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.950572   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.951770   58376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:35.951779   58376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:35.951791   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.957009   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957381   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.957405   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957550   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.957717   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.957841   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.957953   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
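	Each "new ssh client" line above opens a key-based SSH connection to the machine's IP so the provisioning commands that follow can run on the node. A rough sketch of one such command executed over SSH (plain golang.org/x/crypto/ssh, not minikube's sshutil; address, user and key path are the ones shown in the log):

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    // runOverSSH opens one session and runs a single command, the way the
	    // provisioning steps above are executed against the VM.
	    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	    	key, err := os.ReadFile(keyPath)
	    	if err != nil {
	    		return "", err
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		return "", err
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            user,
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	    	}
	    	client, err := ssh.Dial("tcp", addr, cfg)
	    	if err != nil {
	    		return "", err
	    	}
	    	defer client.Close()
	    	sess, err := client.NewSession()
	    	if err != nil {
	    		return "", err
	    	}
	    	defer sess.Close()
	    	out, err := sess.CombinedOutput(cmd)
	    	return string(out), err
	    }

	    func main() {
	    	out, err := runOverSSH("192.168.72.37:22", "docker",
	    		"/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa",
	    		"sudo systemctl start kubelet")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Print(out)
	    }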
	I0719 15:48:36.072337   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:36.091547   58376 node_ready.go:35] waiting up to 6m0s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:36.182328   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:36.195704   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:36.195729   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:36.221099   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:36.224606   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:36.224632   58376 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:36.247264   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:36.247289   58376 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:36.300365   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:37.231670   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010526005s)
	I0719 15:48:37.231729   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231743   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.231765   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049406285s)
	I0719 15:48:37.231807   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231822   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232034   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232085   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232096   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.232100   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232105   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.232115   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232345   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232366   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233486   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233529   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233541   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.233549   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.233792   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233815   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233832   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.240487   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.240502   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.240732   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.240754   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.240755   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288064   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288085   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288370   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288389   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288378   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288400   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288406   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288595   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288606   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288652   58376 addons.go:475] Verifying addon metrics-server=true in "embed-certs-817144"
	I0719 15:48:37.290497   58376 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:48:33.300279   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.800631   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.300013   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.800051   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.300468   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.800383   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.300186   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.800623   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.300068   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.799841   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
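	The half-second cadence of the pgrep lines above is a poll: after the control plane is restarted, the run keeps checking until a kube-apiserver process started for this minikube profile appears. A minimal sketch of the same poll (pattern copied verbatim from the log):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // waitForAPIServerProcess keeps running pgrep until it matches, or the
	    // timeout expires. pgrep exits 0 only when at least one process matches.
	    func waitForAPIServerProcess(timeout time.Duration) bool {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	    			return true
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return false
	    }

	    func main() {
	    	if waitForAPIServerProcess(4 * time.Minute) {
	    		fmt.Println("kube-apiserver process is up")
	    	} else {
	    		fmt.Println("gave up waiting for kube-apiserver")
	    	}
	    }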
	I0719 15:48:35.314792   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.814653   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.291961   58376 addons.go:510] duration metric: took 1.403238435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
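	The addon enablement traced above copies each manifest to /etc/kubernetes/addons on the node and then applies them in a single kubectl invocation using the kubelet-side kubeconfig. A sketch of that final apply step, assuming it runs directly on the node rather than through the ssh_runner shown in the log (paths and kubectl version are the ones logged):

	    package main

	    import (
	    	"log"
	    	"os/exec"
	    )

	    func main() {
	    	manifests := []string{
	    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
	    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
	    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
	    		"/etc/kubernetes/addons/metrics-server-service.yaml",
	    	}
	    	// Build: sudo KUBECONFIG=... kubectl apply -f a -f b ..., as in the log.
	    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
	    		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply"}
	    	for _, m := range manifests {
	    		args = append(args, "-f", m)
	    	}
	    	out, err := exec.Command("sudo", args...).CombinedOutput()
	    	if err != nil {
	    		log.Fatalf("apply failed: %v\n%s", err, out)
	    	}
	    	log.Printf("applied %d addon manifests:\n%s", len(manifests), out)
	    }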
	I0719 15:48:38.096793   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.584345   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.585215   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:38.300002   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.800639   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.300564   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.800314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.300642   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.799787   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.299849   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.300242   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.800481   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.818959   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.313745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:44.314213   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:40.596246   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.095976   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.595640   58376 node_ready.go:49] node "embed-certs-817144" has status "Ready":"True"
	I0719 15:48:43.595659   58376 node_ready.go:38] duration metric: took 7.504089345s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:43.595667   58376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:43.600832   58376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605878   58376 pod_ready.go:92] pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.605900   58376 pod_ready.go:81] duration metric: took 5.046391ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605912   58376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610759   58376 pod_ready.go:92] pod "etcd-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.610778   58376 pod_ready.go:81] duration metric: took 4.85915ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610788   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615239   58376 pod_ready.go:92] pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.615257   58376 pod_ready.go:81] duration metric: took 4.46126ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615267   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619789   58376 pod_ready.go:92] pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.619804   58376 pod_ready.go:81] duration metric: took 4.530085ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619814   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998585   58376 pod_ready.go:92] pod "kube-proxy-4d4g9" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.998612   58376 pod_ready.go:81] duration metric: took 378.78761ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998622   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:40.084033   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.582983   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:43.300412   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.300117   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.799821   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.300031   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.800676   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.300710   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.800307   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.800008   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.812904   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.313178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:46.004415   58376 pod_ready.go:102] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.006304   58376 pod_ready.go:92] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:48.006329   58376 pod_ready.go:81] duration metric: took 4.00769937s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:48.006339   58376 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:45.082973   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:47.582224   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.582782   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.300512   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.799929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:48.799998   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:48.839823   58817 cri.go:89] found id: ""
	I0719 15:48:48.839845   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.839852   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:48.839863   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:48.839920   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:48.874635   58817 cri.go:89] found id: ""
	I0719 15:48:48.874661   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.874671   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:48.874679   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:48.874736   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:48.909391   58817 cri.go:89] found id: ""
	I0719 15:48:48.909417   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.909426   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:48.909431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:48.909491   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:48.951232   58817 cri.go:89] found id: ""
	I0719 15:48:48.951258   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.951265   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:48.951271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:48.951323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:48.984391   58817 cri.go:89] found id: ""
	I0719 15:48:48.984413   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.984420   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:48.984426   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:48.984481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:49.018949   58817 cri.go:89] found id: ""
	I0719 15:48:49.018987   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.018996   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:49.019003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:49.019060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:49.055182   58817 cri.go:89] found id: ""
	I0719 15:48:49.055208   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.055217   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:49.055222   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:49.055270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:49.090341   58817 cri.go:89] found id: ""
	I0719 15:48:49.090364   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.090371   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:49.090378   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:49.090390   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:49.104137   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:49.104166   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:49.239447   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:49.239473   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:49.239489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:49.307270   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:49.307307   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:49.345886   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:49.345925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
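	Each cycle above is the same diagnostic fallback: crictl finds no kube-apiserver (or other control-plane) container, "kubectl describe nodes" is refused on localhost:8443, so the run gathers kubelet, dmesg, CRI-O and container-status output from the host instead. A compressed sketch of one such cycle (command strings copied from the log; helper names are hypothetical):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // containerIDs lists CRI container IDs matching a name filter, the same
	    // query the log issues for each control-plane component.
	    func containerIDs(name string) []string {
	    	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	    	return strings.Fields(string(out))
	    }

	    // gather shells out through bash so the pipelines from the log work unchanged.
	    func gather(label, cmd string) {
	    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	    	fmt.Printf("== %s ==\n%s\n", label, out)
	    }

	    func main() {
	    	if len(containerIDs("kube-apiserver")) == 0 {
	    		// No apiserver container: describe nodes will be refused, so fall
	    		// back to host-level logs, as each cycle in the log does.
	    		gather("kubelet", "sudo journalctl -u kubelet -n 400")
	    		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	    		gather("CRI-O", "sudo journalctl -u crio -n 400")
	    		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	    	}
	    }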
	I0719 15:48:51.898153   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:51.911943   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:51.912006   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:51.946512   58817 cri.go:89] found id: ""
	I0719 15:48:51.946562   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.946573   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:51.946603   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:51.946664   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:51.982341   58817 cri.go:89] found id: ""
	I0719 15:48:51.982373   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.982381   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:51.982387   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:51.982441   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:52.019705   58817 cri.go:89] found id: ""
	I0719 15:48:52.019732   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.019739   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:52.019744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:52.019799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:52.057221   58817 cri.go:89] found id: ""
	I0719 15:48:52.057250   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.057262   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:52.057271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:52.057353   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:52.097277   58817 cri.go:89] found id: ""
	I0719 15:48:52.097306   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.097317   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:52.097325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:52.097389   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:52.136354   58817 cri.go:89] found id: ""
	I0719 15:48:52.136398   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.136406   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:52.136412   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:52.136463   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:52.172475   58817 cri.go:89] found id: ""
	I0719 15:48:52.172502   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.172510   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:52.172516   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:52.172565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:52.209164   58817 cri.go:89] found id: ""
	I0719 15:48:52.209192   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.209204   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:52.209214   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:52.209238   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:52.260069   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:52.260101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:52.274794   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:52.274825   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:52.356599   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:52.356628   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:52.356650   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:52.427582   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:52.427630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:51.814049   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:53.815503   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:50.015637   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:52.515491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:51.583726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.083179   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.977864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:54.993571   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:54.993645   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:55.034576   58817 cri.go:89] found id: ""
	I0719 15:48:55.034630   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.034641   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:55.034649   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:55.034712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:55.068305   58817 cri.go:89] found id: ""
	I0719 15:48:55.068332   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.068343   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:55.068350   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:55.068408   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:55.106192   58817 cri.go:89] found id: ""
	I0719 15:48:55.106220   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.106227   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:55.106248   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:55.106304   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:55.141287   58817 cri.go:89] found id: ""
	I0719 15:48:55.141318   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.141328   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:55.141334   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:55.141391   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:55.179965   58817 cri.go:89] found id: ""
	I0719 15:48:55.179989   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.179999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:55.180007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:55.180065   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.213558   58817 cri.go:89] found id: ""
	I0719 15:48:55.213588   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.213598   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:55.213607   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:55.213663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:55.247201   58817 cri.go:89] found id: ""
	I0719 15:48:55.247230   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.247243   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:55.247250   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:55.247309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:55.283157   58817 cri.go:89] found id: ""
	I0719 15:48:55.283191   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.283200   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:55.283211   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:55.283228   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:55.361089   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:55.361116   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:55.361134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:55.437784   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:55.437819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:55.480735   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:55.480770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:55.534013   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:55.534045   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.048567   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:58.063073   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:58.063146   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:58.100499   58817 cri.go:89] found id: ""
	I0719 15:48:58.100527   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.100538   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:58.100545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:58.100612   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:58.136885   58817 cri.go:89] found id: ""
	I0719 15:48:58.136913   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.136924   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:58.136932   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:58.137000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:58.172034   58817 cri.go:89] found id: ""
	I0719 15:48:58.172064   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.172074   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:58.172081   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:58.172135   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:58.209113   58817 cri.go:89] found id: ""
	I0719 15:48:58.209145   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.209157   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:58.209166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:58.209256   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:58.258903   58817 cri.go:89] found id: ""
	I0719 15:48:58.258938   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.258949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:58.258957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:58.259016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.816000   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.817771   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:55.014213   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.014730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:56.083381   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.088572   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.312314   58817 cri.go:89] found id: ""
	I0719 15:48:58.312342   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.312353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:58.312361   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:58.312421   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:58.349566   58817 cri.go:89] found id: ""
	I0719 15:48:58.349628   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.349638   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:58.349645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:58.349709   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:58.383834   58817 cri.go:89] found id: ""
	I0719 15:48:58.383863   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.383880   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:58.383893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:58.383907   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:58.436984   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:58.437020   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.450460   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:58.450489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:58.523392   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:58.523408   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:58.523420   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:58.601407   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:58.601439   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:01.141864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:01.155908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:01.155965   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:01.191492   58817 cri.go:89] found id: ""
	I0719 15:49:01.191524   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.191534   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:01.191542   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:01.191623   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:01.227615   58817 cri.go:89] found id: ""
	I0719 15:49:01.227646   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.227653   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:01.227659   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:01.227716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:01.262624   58817 cri.go:89] found id: ""
	I0719 15:49:01.262647   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.262655   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:01.262661   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:01.262717   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:01.298328   58817 cri.go:89] found id: ""
	I0719 15:49:01.298358   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.298370   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:01.298378   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:01.298439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:01.333181   58817 cri.go:89] found id: ""
	I0719 15:49:01.333208   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.333218   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:01.333225   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:01.333284   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:01.369952   58817 cri.go:89] found id: ""
	I0719 15:49:01.369980   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.369990   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:01.369997   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:01.370076   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:01.405232   58817 cri.go:89] found id: ""
	I0719 15:49:01.405263   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.405273   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:01.405280   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:01.405340   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:01.442960   58817 cri.go:89] found id: ""
	I0719 15:49:01.442989   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.442999   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:01.443009   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:01.443036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:01.493680   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:01.493712   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:01.506699   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:01.506732   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:01.586525   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:01.586547   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:01.586562   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:01.673849   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:01.673897   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:00.313552   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:02.812079   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:59.513087   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:01.514094   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.013514   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:00.583159   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:03.082968   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.219314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:04.233386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:04.233481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:04.274762   58817 cri.go:89] found id: ""
	I0719 15:49:04.274792   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.274802   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:04.274826   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:04.274881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:04.312047   58817 cri.go:89] found id: ""
	I0719 15:49:04.312073   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.312082   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:04.312089   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:04.312164   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:04.351258   58817 cri.go:89] found id: ""
	I0719 15:49:04.351293   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.351307   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:04.351314   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:04.351373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:04.385969   58817 cri.go:89] found id: ""
	I0719 15:49:04.385994   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.386002   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:04.386007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:04.386054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:04.425318   58817 cri.go:89] found id: ""
	I0719 15:49:04.425342   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.425351   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:04.425358   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:04.425416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:04.462578   58817 cri.go:89] found id: ""
	I0719 15:49:04.462607   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.462618   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:04.462626   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:04.462682   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:04.502967   58817 cri.go:89] found id: ""
	I0719 15:49:04.502999   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.503017   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:04.503025   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:04.503084   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:04.540154   58817 cri.go:89] found id: ""
	I0719 15:49:04.540185   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.540195   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:04.540230   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:04.540246   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:04.596126   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:04.596164   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:04.610468   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:04.610509   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:04.683759   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
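	(Every "describe nodes" attempt in this run fails the same way: the bundled v1.20.0 kubectl under /var/lib/minikube/binaries cannot reach the API server on localhost:8443, consistent with the empty kube-apiserver container listings above. A sketch of reproducing the check manually, with the profile name as a hypothetical <profile> placeholder since it is not shown in these lines:

	minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig)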
	I0719 15:49:04.683783   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:04.683803   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:04.764758   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:04.764796   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.303933   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:07.317959   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:07.318031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:07.356462   58817 cri.go:89] found id: ""
	I0719 15:49:07.356490   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.356498   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:07.356511   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:07.356566   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:07.391533   58817 cri.go:89] found id: ""
	I0719 15:49:07.391563   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.391574   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:07.391582   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:07.391662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:07.427877   58817 cri.go:89] found id: ""
	I0719 15:49:07.427914   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.427922   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:07.427927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:07.428005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:07.464667   58817 cri.go:89] found id: ""
	I0719 15:49:07.464691   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.464699   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:07.464704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:07.464768   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:07.499296   58817 cri.go:89] found id: ""
	I0719 15:49:07.499321   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.499329   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:07.499336   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:07.499400   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:07.541683   58817 cri.go:89] found id: ""
	I0719 15:49:07.541715   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.541726   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:07.541733   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:07.541791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:07.577698   58817 cri.go:89] found id: ""
	I0719 15:49:07.577726   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.577737   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:07.577744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:07.577799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:07.613871   58817 cri.go:89] found id: ""
	I0719 15:49:07.613904   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.613914   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:07.613926   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:07.613942   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:07.690982   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:07.691006   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:07.691021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:07.778212   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:07.778277   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.820821   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:07.820866   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:07.873053   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:07.873097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:05.312525   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.812891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:06.013654   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:08.015552   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:05.083931   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.583371   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
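	(The pod_ready lines from the other test processes (58376, 58417, 59208) are polling the Ready condition of their metrics-server pods, which stays False throughout the window shown here. A minimal manual equivalent, assuming the kube context for the affected profile is selected and reusing one of the pod names from the log:

	kubectl -n kube-system get pod metrics-server-569cc877fc-h7hgv \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block until it becomes Ready (times out otherwise)
	kubectl -n kube-system wait --for=condition=ready pod/metrics-server-569cc877fc-h7hgv --timeout=120s)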
	I0719 15:49:10.387941   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:10.401132   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:10.401205   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:10.437084   58817 cri.go:89] found id: ""
	I0719 15:49:10.437112   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.437120   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:10.437178   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:10.437243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:10.472675   58817 cri.go:89] found id: ""
	I0719 15:49:10.472703   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.472712   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:10.472720   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:10.472780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:10.506448   58817 cri.go:89] found id: ""
	I0719 15:49:10.506480   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.506490   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:10.506497   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:10.506544   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:10.542574   58817 cri.go:89] found id: ""
	I0719 15:49:10.542604   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.542612   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:10.542618   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:10.542701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:10.575963   58817 cri.go:89] found id: ""
	I0719 15:49:10.575990   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.575999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:10.576005   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:10.576063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:10.614498   58817 cri.go:89] found id: ""
	I0719 15:49:10.614529   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.614539   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:10.614548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:10.614613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:10.652802   58817 cri.go:89] found id: ""
	I0719 15:49:10.652825   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.652833   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:10.652838   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:10.652886   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:10.688985   58817 cri.go:89] found id: ""
	I0719 15:49:10.689019   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.689029   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:10.689041   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:10.689058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:10.741552   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:10.741586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.756514   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:10.756542   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:10.837916   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:10.837940   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:10.837956   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:10.919878   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:10.919924   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
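	(The "container status" gather uses a fallback chain: it resolves crictl with which, falls back to the bare name if that lookup fails, and only runs docker ps -a when the crictl invocation itself fails. The same fallback, expanded for readability, assuming root on the node:

	CRICTL="$(which crictl || echo crictl)"
	sudo "$CRICTL" ps -a || sudo docker ps -a)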
	I0719 15:49:09.824389   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.312960   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.512671   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.513359   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.082891   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:14.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:13.462603   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:13.476387   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:13.476449   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:13.514170   58817 cri.go:89] found id: ""
	I0719 15:49:13.514195   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.514205   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:13.514211   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:13.514281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:13.548712   58817 cri.go:89] found id: ""
	I0719 15:49:13.548739   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.548747   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:13.548753   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:13.548808   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:13.582623   58817 cri.go:89] found id: ""
	I0719 15:49:13.582648   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.582657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:13.582664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:13.582721   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:13.619343   58817 cri.go:89] found id: ""
	I0719 15:49:13.619369   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.619379   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:13.619385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:13.619444   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:13.655755   58817 cri.go:89] found id: ""
	I0719 15:49:13.655785   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.655793   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:13.655798   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:13.655856   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:13.691021   58817 cri.go:89] found id: ""
	I0719 15:49:13.691104   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.691124   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:13.691133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:13.691196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:13.728354   58817 cri.go:89] found id: ""
	I0719 15:49:13.728380   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.728390   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:13.728397   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:13.728459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:13.764498   58817 cri.go:89] found id: ""
	I0719 15:49:13.764526   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.764535   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:13.764544   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:13.764557   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.803474   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:13.803500   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:13.854709   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:13.854742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:13.870499   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:13.870526   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:13.943250   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:13.943270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:13.943282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.525806   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:16.539483   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:16.539558   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:16.574003   58817 cri.go:89] found id: ""
	I0719 15:49:16.574032   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.574043   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:16.574050   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:16.574112   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:16.610637   58817 cri.go:89] found id: ""
	I0719 15:49:16.610668   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.610676   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:16.610682   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:16.610731   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:16.648926   58817 cri.go:89] found id: ""
	I0719 15:49:16.648957   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.648968   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:16.648975   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:16.649027   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:16.682819   58817 cri.go:89] found id: ""
	I0719 15:49:16.682848   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.682859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:16.682866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:16.682919   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:16.719879   58817 cri.go:89] found id: ""
	I0719 15:49:16.719912   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.719922   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:16.719930   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:16.719988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:16.755776   58817 cri.go:89] found id: ""
	I0719 15:49:16.755809   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.755820   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:16.755829   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:16.755903   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:16.792158   58817 cri.go:89] found id: ""
	I0719 15:49:16.792186   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.792193   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:16.792199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:16.792260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:16.829694   58817 cri.go:89] found id: ""
	I0719 15:49:16.829722   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.829733   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:16.829741   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:16.829761   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:16.843522   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:16.843552   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:16.914025   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:16.914047   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:16.914063   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.996672   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:16.996709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:17.042138   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:17.042170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:14.813090   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.311701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:15.014386   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.513993   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:16.584566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.082569   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.597598   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:19.611433   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:19.611487   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:19.646047   58817 cri.go:89] found id: ""
	I0719 15:49:19.646073   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.646080   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:19.646086   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:19.646145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:19.683589   58817 cri.go:89] found id: ""
	I0719 15:49:19.683620   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.683632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:19.683643   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:19.683701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:19.722734   58817 cri.go:89] found id: ""
	I0719 15:49:19.722761   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.722771   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:19.722778   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:19.722836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:19.759418   58817 cri.go:89] found id: ""
	I0719 15:49:19.759445   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.759454   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:19.759459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:19.759522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:19.795168   58817 cri.go:89] found id: ""
	I0719 15:49:19.795193   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.795201   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:19.795206   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:19.795259   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:19.830930   58817 cri.go:89] found id: ""
	I0719 15:49:19.830959   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.830969   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:19.830976   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:19.831035   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:19.866165   58817 cri.go:89] found id: ""
	I0719 15:49:19.866187   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.866195   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:19.866201   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:19.866252   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:19.899415   58817 cri.go:89] found id: ""
	I0719 15:49:19.899446   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.899456   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:19.899467   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:19.899482   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.950944   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:19.950975   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:19.964523   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:19.964545   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:20.032244   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:20.032270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:20.032290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:20.110285   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:20.110317   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:22.650693   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:22.666545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:22.666618   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:22.709820   58817 cri.go:89] found id: ""
	I0719 15:49:22.709846   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.709854   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:22.709860   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:22.709905   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:22.745373   58817 cri.go:89] found id: ""
	I0719 15:49:22.745398   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.745406   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:22.745411   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:22.745461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:22.785795   58817 cri.go:89] found id: ""
	I0719 15:49:22.785828   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.785838   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:22.785846   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:22.785904   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:22.826542   58817 cri.go:89] found id: ""
	I0719 15:49:22.826569   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.826579   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:22.826587   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:22.826648   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:22.866761   58817 cri.go:89] found id: ""
	I0719 15:49:22.866789   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.866800   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:22.866807   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:22.866868   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:22.913969   58817 cri.go:89] found id: ""
	I0719 15:49:22.913999   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.914009   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:22.914017   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:22.914082   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:22.950230   58817 cri.go:89] found id: ""
	I0719 15:49:22.950287   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.950298   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:22.950305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:22.950366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:22.986400   58817 cri.go:89] found id: ""
	I0719 15:49:22.986424   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.986434   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:22.986446   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:22.986460   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:23.072119   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:23.072153   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:23.111021   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:23.111053   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:23.161490   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:23.161518   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:23.174729   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:23.174766   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:23.251205   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:19.814129   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.814762   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:23.817102   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:20.012767   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:22.512467   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.587074   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:24.082829   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.752355   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:25.765501   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:25.765559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:25.801073   58817 cri.go:89] found id: ""
	I0719 15:49:25.801107   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.801117   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:25.801126   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:25.801187   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:25.839126   58817 cri.go:89] found id: ""
	I0719 15:49:25.839151   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.839158   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:25.839163   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:25.839210   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:25.873081   58817 cri.go:89] found id: ""
	I0719 15:49:25.873110   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.873120   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:25.873134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:25.873183   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:25.908874   58817 cri.go:89] found id: ""
	I0719 15:49:25.908910   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.908921   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:25.908929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:25.908988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:25.945406   58817 cri.go:89] found id: ""
	I0719 15:49:25.945431   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.945439   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:25.945445   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:25.945515   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:25.978276   58817 cri.go:89] found id: ""
	I0719 15:49:25.978298   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.978306   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:25.978312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:25.978359   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:26.013749   58817 cri.go:89] found id: ""
	I0719 15:49:26.013776   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.013786   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:26.013792   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:26.013840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:26.046225   58817 cri.go:89] found id: ""
	I0719 15:49:26.046269   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.046280   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:26.046290   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:26.046305   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:26.086785   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:26.086808   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:26.138746   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:26.138777   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:26.152114   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:26.152139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:26.224234   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:26.224262   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:26.224279   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:26.312496   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.312687   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.015437   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:27.514515   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:26.084854   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.584103   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.802738   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:28.817246   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:28.817321   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:28.852398   58817 cri.go:89] found id: ""
	I0719 15:49:28.852429   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.852437   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:28.852449   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:28.852500   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:28.890337   58817 cri.go:89] found id: ""
	I0719 15:49:28.890368   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.890378   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:28.890386   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:28.890446   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:28.929083   58817 cri.go:89] found id: ""
	I0719 15:49:28.929106   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.929113   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:28.929119   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:28.929173   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:28.967708   58817 cri.go:89] found id: ""
	I0719 15:49:28.967735   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.967745   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:28.967752   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:28.967812   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:29.001087   58817 cri.go:89] found id: ""
	I0719 15:49:29.001115   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.001131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:29.001139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:29.001198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:29.039227   58817 cri.go:89] found id: ""
	I0719 15:49:29.039258   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.039268   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:29.039275   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:29.039333   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:29.079927   58817 cri.go:89] found id: ""
	I0719 15:49:29.079955   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.079965   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:29.079973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:29.080037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:29.115035   58817 cri.go:89] found id: ""
	I0719 15:49:29.115060   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.115070   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:29.115080   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:29.115094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:29.168452   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:29.168487   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:29.182483   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:29.182517   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:29.256139   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:29.256177   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:29.256193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:29.342435   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:29.342472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:31.888988   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:31.902450   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:31.902524   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:31.940007   58817 cri.go:89] found id: ""
	I0719 15:49:31.940035   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.940045   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:31.940053   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:31.940111   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:31.978055   58817 cri.go:89] found id: ""
	I0719 15:49:31.978089   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.978101   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:31.978109   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:31.978168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:32.011666   58817 cri.go:89] found id: ""
	I0719 15:49:32.011697   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.011707   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:32.011714   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:32.011779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:32.046326   58817 cri.go:89] found id: ""
	I0719 15:49:32.046363   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.046373   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:32.046383   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:32.046447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:32.082387   58817 cri.go:89] found id: ""
	I0719 15:49:32.082416   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.082425   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:32.082432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:32.082488   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:32.118653   58817 cri.go:89] found id: ""
	I0719 15:49:32.118693   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.118703   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:32.118710   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:32.118769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:32.154053   58817 cri.go:89] found id: ""
	I0719 15:49:32.154075   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.154082   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:32.154088   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:32.154134   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:32.189242   58817 cri.go:89] found id: ""
	I0719 15:49:32.189272   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.189283   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:32.189293   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:32.189309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:32.263285   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:32.263313   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:32.263329   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:32.341266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:32.341302   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:32.380827   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:32.380852   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:32.432888   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:32.432922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:30.313153   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:32.812075   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:29.514963   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.515163   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.014174   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.083793   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:33.083838   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.948894   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:34.963787   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:34.963840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:35.000752   58817 cri.go:89] found id: ""
	I0719 15:49:35.000782   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.000788   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:35.000794   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:35.000849   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:35.038325   58817 cri.go:89] found id: ""
	I0719 15:49:35.038355   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.038367   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:35.038375   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:35.038433   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:35.074945   58817 cri.go:89] found id: ""
	I0719 15:49:35.074972   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.074981   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:35.074987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:35.075031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:35.111644   58817 cri.go:89] found id: ""
	I0719 15:49:35.111671   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.111681   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:35.111688   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:35.111746   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:35.146101   58817 cri.go:89] found id: ""
	I0719 15:49:35.146132   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.146141   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:35.146148   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:35.146198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:35.185147   58817 cri.go:89] found id: ""
	I0719 15:49:35.185173   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.185181   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:35.185188   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:35.185233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:35.227899   58817 cri.go:89] found id: ""
	I0719 15:49:35.227931   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.227941   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:35.227949   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:35.228010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:35.265417   58817 cri.go:89] found id: ""
	I0719 15:49:35.265441   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.265451   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:35.265462   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:35.265477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:35.316534   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:35.316567   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:35.330131   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:35.330154   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:35.401068   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:35.401091   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:35.401107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:35.477126   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:35.477170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.019443   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:38.035957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:38.036032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:38.078249   58817 cri.go:89] found id: ""
	I0719 15:49:38.078278   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.078288   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:38.078296   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:38.078367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:38.125072   58817 cri.go:89] found id: ""
	I0719 15:49:38.125098   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.125106   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:38.125112   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:38.125171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:38.165134   58817 cri.go:89] found id: ""
	I0719 15:49:38.165160   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.165170   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:38.165178   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:38.165233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:38.204968   58817 cri.go:89] found id: ""
	I0719 15:49:38.204995   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.205004   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:38.205013   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:38.205074   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:38.237132   58817 cri.go:89] found id: ""
	I0719 15:49:38.237157   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.237167   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:38.237174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:38.237231   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:34.812542   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.311929   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.312244   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:36.513892   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.013261   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:35.084098   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.587696   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:38.274661   58817 cri.go:89] found id: ""
	I0719 15:49:38.274691   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.274699   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:38.274704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:38.274747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:38.311326   58817 cri.go:89] found id: ""
	I0719 15:49:38.311354   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.311365   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:38.311372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:38.311428   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:38.348071   58817 cri.go:89] found id: ""
	I0719 15:49:38.348099   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.348110   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:38.348120   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:38.348134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:38.432986   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:38.433021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.472439   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:38.472486   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:38.526672   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:38.526706   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:38.540777   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:38.540800   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:38.617657   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.118442   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:41.131935   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:41.132016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:41.164303   58817 cri.go:89] found id: ""
	I0719 15:49:41.164330   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.164342   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:41.164348   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:41.164396   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:41.197878   58817 cri.go:89] found id: ""
	I0719 15:49:41.197901   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.197909   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:41.197927   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:41.197979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:41.231682   58817 cri.go:89] found id: ""
	I0719 15:49:41.231712   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.231722   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:41.231730   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:41.231793   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:41.268328   58817 cri.go:89] found id: ""
	I0719 15:49:41.268354   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.268364   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:41.268372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:41.268422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:41.306322   58817 cri.go:89] found id: ""
	I0719 15:49:41.306350   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.306358   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:41.306365   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:41.306416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:41.342332   58817 cri.go:89] found id: ""
	I0719 15:49:41.342361   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.342372   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:41.342379   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:41.342440   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:41.378326   58817 cri.go:89] found id: ""
	I0719 15:49:41.378352   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.378362   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:41.378371   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:41.378422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:41.410776   58817 cri.go:89] found id: ""
	I0719 15:49:41.410804   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.410814   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:41.410824   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:41.410843   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:41.424133   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:41.424157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:41.498684   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.498764   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:41.498784   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:41.583440   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:41.583472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:41.624962   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:41.624998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:41.313207   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.815916   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:41.013495   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.513445   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:40.082726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:42.583599   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.584503   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.177094   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:44.191411   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:44.191466   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:44.226809   58817 cri.go:89] found id: ""
	I0719 15:49:44.226837   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.226847   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:44.226855   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:44.226951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:44.262361   58817 cri.go:89] found id: ""
	I0719 15:49:44.262391   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.262402   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:44.262408   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:44.262452   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:44.295729   58817 cri.go:89] found id: ""
	I0719 15:49:44.295758   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.295768   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:44.295775   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:44.295836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:44.330968   58817 cri.go:89] found id: ""
	I0719 15:49:44.330996   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.331005   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:44.331012   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:44.331068   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:44.367914   58817 cri.go:89] found id: ""
	I0719 15:49:44.367937   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.367945   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:44.367951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:44.368005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:44.401127   58817 cri.go:89] found id: ""
	I0719 15:49:44.401151   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.401159   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:44.401164   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:44.401207   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:44.435696   58817 cri.go:89] found id: ""
	I0719 15:49:44.435724   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.435734   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:44.435741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:44.435803   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:44.481553   58817 cri.go:89] found id: ""
	I0719 15:49:44.481582   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.481592   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:44.481603   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:44.481618   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:44.573147   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:44.573181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:44.618556   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:44.618580   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.673328   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:44.673364   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:44.687806   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:44.687835   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:44.763624   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.264039   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:47.277902   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:47.277984   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:47.318672   58817 cri.go:89] found id: ""
	I0719 15:49:47.318702   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.318713   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:47.318720   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:47.318780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:47.360410   58817 cri.go:89] found id: ""
	I0719 15:49:47.360434   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.360444   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:47.360451   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:47.360507   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:47.397890   58817 cri.go:89] found id: ""
	I0719 15:49:47.397918   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.397925   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:47.397931   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:47.397981   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:47.438930   58817 cri.go:89] found id: ""
	I0719 15:49:47.438960   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.438971   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:47.438981   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:47.439040   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:47.479242   58817 cri.go:89] found id: ""
	I0719 15:49:47.479267   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.479277   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:47.479285   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:47.479341   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:47.518583   58817 cri.go:89] found id: ""
	I0719 15:49:47.518610   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.518620   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:47.518628   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:47.518686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:47.553714   58817 cri.go:89] found id: ""
	I0719 15:49:47.553736   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.553744   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:47.553750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:47.553798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:47.591856   58817 cri.go:89] found id: ""
	I0719 15:49:47.591879   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.591886   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:47.591893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:47.591904   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:47.644911   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:47.644951   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:47.659718   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:47.659742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:47.735693   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.735713   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:47.735727   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:47.816090   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:47.816121   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:46.313534   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.811536   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:46.012299   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.515396   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:47.082848   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:49.083291   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.358703   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:50.373832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:50.373908   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:50.408598   58817 cri.go:89] found id: ""
	I0719 15:49:50.408640   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.408649   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:50.408655   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:50.408701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:50.446067   58817 cri.go:89] found id: ""
	I0719 15:49:50.446096   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.446104   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:50.446110   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:50.446152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:50.480886   58817 cri.go:89] found id: ""
	I0719 15:49:50.480918   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.480927   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:50.480933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:50.480997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:50.514680   58817 cri.go:89] found id: ""
	I0719 15:49:50.514707   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.514717   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:50.514724   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:50.514779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:50.550829   58817 cri.go:89] found id: ""
	I0719 15:49:50.550854   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.550861   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:50.550866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:50.550910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:50.585407   58817 cri.go:89] found id: ""
	I0719 15:49:50.585434   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.585444   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:50.585452   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:50.585511   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:50.623083   58817 cri.go:89] found id: ""
	I0719 15:49:50.623110   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.623121   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:50.623129   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:50.623181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:50.667231   58817 cri.go:89] found id: ""
	I0719 15:49:50.667258   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.667266   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:50.667274   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:50.667290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:50.718998   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:50.719032   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:50.733560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:50.733595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:50.800276   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:50.800298   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:50.800310   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:50.881314   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:50.881354   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:50.813781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:52.817124   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.516602   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.012716   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:51.083390   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.583030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.427179   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:53.444191   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:53.444250   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:53.481092   58817 cri.go:89] found id: ""
	I0719 15:49:53.481125   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.481135   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:53.481143   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:53.481202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:53.517308   58817 cri.go:89] found id: ""
	I0719 15:49:53.517332   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.517340   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:53.517345   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:53.517390   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:53.552638   58817 cri.go:89] found id: ""
	I0719 15:49:53.552667   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.552677   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:53.552684   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:53.552750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:53.587003   58817 cri.go:89] found id: ""
	I0719 15:49:53.587027   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.587034   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:53.587044   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:53.587093   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:53.620361   58817 cri.go:89] found id: ""
	I0719 15:49:53.620389   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.620399   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:53.620406   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:53.620464   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:53.659231   58817 cri.go:89] found id: ""
	I0719 15:49:53.659255   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.659262   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:53.659267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:53.659323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:53.695312   58817 cri.go:89] found id: ""
	I0719 15:49:53.695345   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.695355   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:53.695362   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:53.695430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:53.735670   58817 cri.go:89] found id: ""
	I0719 15:49:53.735698   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.735708   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:53.735718   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:53.735733   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:53.750912   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:53.750940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:53.818038   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:53.818064   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:53.818077   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:53.902200   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:53.902259   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.945805   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:53.945847   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.498178   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:56.511454   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:56.511541   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:56.548043   58817 cri.go:89] found id: ""
	I0719 15:49:56.548070   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.548081   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:56.548089   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:56.548149   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:56.583597   58817 cri.go:89] found id: ""
	I0719 15:49:56.583620   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.583632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:56.583651   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:56.583710   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:56.622673   58817 cri.go:89] found id: ""
	I0719 15:49:56.622704   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.622714   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:56.622722   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:56.622785   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:56.659663   58817 cri.go:89] found id: ""
	I0719 15:49:56.659691   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.659702   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:56.659711   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:56.659764   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:56.694072   58817 cri.go:89] found id: ""
	I0719 15:49:56.694097   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.694105   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:56.694111   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:56.694158   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:56.730104   58817 cri.go:89] found id: ""
	I0719 15:49:56.730131   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.730139   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:56.730144   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:56.730202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:56.762952   58817 cri.go:89] found id: ""
	I0719 15:49:56.762977   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.762988   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:56.762995   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:56.763059   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:56.800091   58817 cri.go:89] found id: ""
	I0719 15:49:56.800114   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.800122   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:56.800130   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:56.800141   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:56.843328   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:56.843363   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.894700   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:56.894734   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:56.908975   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:56.908999   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:56.980062   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:56.980087   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:56.980099   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:55.312032   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.813778   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:55.013719   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.014070   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:56.083506   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:58.582593   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.557467   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:59.571083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:59.571151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:59.606593   58817 cri.go:89] found id: ""
	I0719 15:49:59.606669   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.606680   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:59.606688   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:59.606743   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:59.643086   58817 cri.go:89] found id: ""
	I0719 15:49:59.643115   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.643126   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:59.643134   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:59.643188   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:59.678976   58817 cri.go:89] found id: ""
	I0719 15:49:59.678995   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.679002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:59.679008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:59.679060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:59.713450   58817 cri.go:89] found id: ""
	I0719 15:49:59.713483   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.713490   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:59.713495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:59.713540   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:59.749902   58817 cri.go:89] found id: ""
	I0719 15:49:59.749924   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.749932   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:59.749938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:59.749985   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:59.793298   58817 cri.go:89] found id: ""
	I0719 15:49:59.793327   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.793335   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:59.793341   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:59.793399   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:59.835014   58817 cri.go:89] found id: ""
	I0719 15:49:59.835040   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.835047   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:59.835053   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:59.835101   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:59.874798   58817 cri.go:89] found id: ""
	I0719 15:49:59.874824   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.874831   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:59.874840   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:59.874851   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:59.948173   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:59.948195   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:59.948210   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:00.026793   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:00.026828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:00.066659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:00.066687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:00.119005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:00.119036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:02.634375   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:02.648845   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:02.648918   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:02.683204   58817 cri.go:89] found id: ""
	I0719 15:50:02.683231   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.683240   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:02.683246   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:02.683308   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:02.718869   58817 cri.go:89] found id: ""
	I0719 15:50:02.718901   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.718914   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:02.718921   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:02.718979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:02.758847   58817 cri.go:89] found id: ""
	I0719 15:50:02.758874   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.758885   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:02.758892   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:02.758951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:02.800199   58817 cri.go:89] found id: ""
	I0719 15:50:02.800230   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.800238   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:02.800243   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:02.800289   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:02.840302   58817 cri.go:89] found id: ""
	I0719 15:50:02.840334   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.840345   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:02.840353   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:02.840415   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:02.874769   58817 cri.go:89] found id: ""
	I0719 15:50:02.874794   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.874801   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:02.874818   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:02.874885   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:02.914492   58817 cri.go:89] found id: ""
	I0719 15:50:02.914522   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.914532   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:02.914540   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:02.914601   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:02.951548   58817 cri.go:89] found id: ""
	I0719 15:50:02.951577   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.951588   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:02.951599   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:02.951613   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:03.003081   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:03.003118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:03.017738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:03.017767   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:03.090925   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:03.090947   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:03.090958   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:03.169066   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:03.169101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:59.815894   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.312541   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.513158   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.013500   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:00.583268   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:03.082967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.712269   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:05.724799   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:05.724872   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:05.759074   58817 cri.go:89] found id: ""
	I0719 15:50:05.759101   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.759108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:05.759113   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:05.759169   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:05.798316   58817 cri.go:89] found id: ""
	I0719 15:50:05.798413   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.798432   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:05.798442   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:05.798504   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:05.834861   58817 cri.go:89] found id: ""
	I0719 15:50:05.834890   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.834898   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:05.834903   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:05.834962   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:05.868547   58817 cri.go:89] found id: ""
	I0719 15:50:05.868574   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.868582   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:05.868588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:05.868691   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:05.903684   58817 cri.go:89] found id: ""
	I0719 15:50:05.903718   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.903730   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:05.903738   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:05.903798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:05.938521   58817 cri.go:89] found id: ""
	I0719 15:50:05.938552   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.938567   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:05.938576   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:05.938628   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:05.973683   58817 cri.go:89] found id: ""
	I0719 15:50:05.973710   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.973717   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:05.973723   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:05.973825   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:06.010528   58817 cri.go:89] found id: ""
	I0719 15:50:06.010559   58817 logs.go:276] 0 containers: []
	W0719 15:50:06.010569   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:06.010580   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:06.010593   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:06.053090   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:06.053145   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:06.106906   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:06.106939   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:06.121914   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:06.121944   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:06.197465   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:06.197492   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:06.197507   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:04.814326   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.314104   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:04.513144   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.013900   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.014269   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.582967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.583076   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.583550   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:08.782285   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:08.795115   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:08.795180   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:08.834264   58817 cri.go:89] found id: ""
	I0719 15:50:08.834295   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.834306   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:08.834314   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:08.834371   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:08.873227   58817 cri.go:89] found id: ""
	I0719 15:50:08.873258   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.873268   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:08.873276   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:08.873330   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:08.907901   58817 cri.go:89] found id: ""
	I0719 15:50:08.907929   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.907940   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:08.907948   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:08.908011   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:08.941350   58817 cri.go:89] found id: ""
	I0719 15:50:08.941381   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.941391   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:08.941400   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:08.941453   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:08.978469   58817 cri.go:89] found id: ""
	I0719 15:50:08.978495   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.978502   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:08.978508   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:08.978563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:09.017469   58817 cri.go:89] found id: ""
	I0719 15:50:09.017492   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.017501   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:09.017509   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:09.017563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:09.056675   58817 cri.go:89] found id: ""
	I0719 15:50:09.056703   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.056711   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:09.056718   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:09.056769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:09.096655   58817 cri.go:89] found id: ""
	I0719 15:50:09.096680   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.096688   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:09.096696   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:09.096710   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:09.135765   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:09.135791   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:09.189008   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:09.189044   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:09.203988   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:09.204014   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:09.278418   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:09.278440   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:09.278453   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:11.857017   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:11.870592   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:11.870650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:11.907057   58817 cri.go:89] found id: ""
	I0719 15:50:11.907088   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.907097   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:11.907103   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:11.907152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:11.944438   58817 cri.go:89] found id: ""
	I0719 15:50:11.944466   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.944476   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:11.944484   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:11.944547   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:11.986506   58817 cri.go:89] found id: ""
	I0719 15:50:11.986534   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.986545   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:11.986553   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:11.986610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:12.026171   58817 cri.go:89] found id: ""
	I0719 15:50:12.026221   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.026250   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:12.026260   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:12.026329   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:12.060990   58817 cri.go:89] found id: ""
	I0719 15:50:12.061018   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.061028   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:12.061036   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:12.061097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:12.098545   58817 cri.go:89] found id: ""
	I0719 15:50:12.098573   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.098584   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:12.098591   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:12.098650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:12.134949   58817 cri.go:89] found id: ""
	I0719 15:50:12.134978   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.134989   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:12.134996   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:12.135061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:12.171142   58817 cri.go:89] found id: ""
	I0719 15:50:12.171165   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.171173   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:12.171181   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:12.171193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:12.211496   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:12.211536   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:12.266024   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:12.266060   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:12.280951   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:12.280985   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:12.352245   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:12.352269   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:12.352280   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:09.813831   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.815120   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.815551   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.512872   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.514351   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.584717   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.082745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.929733   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:14.943732   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:14.943815   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:14.980506   58817 cri.go:89] found id: ""
	I0719 15:50:14.980529   58817 logs.go:276] 0 containers: []
	W0719 15:50:14.980539   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:14.980545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:14.980590   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:15.015825   58817 cri.go:89] found id: ""
	I0719 15:50:15.015853   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.015863   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:15.015870   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:15.015937   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:15.054862   58817 cri.go:89] found id: ""
	I0719 15:50:15.054894   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.054905   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:15.054913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:15.054973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:15.092542   58817 cri.go:89] found id: ""
	I0719 15:50:15.092573   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.092590   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:15.092598   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:15.092663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:15.127815   58817 cri.go:89] found id: ""
	I0719 15:50:15.127843   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.127853   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:15.127865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:15.127931   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:15.166423   58817 cri.go:89] found id: ""
	I0719 15:50:15.166446   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.166453   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:15.166459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:15.166517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.199240   58817 cri.go:89] found id: ""
	I0719 15:50:15.199268   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.199277   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:15.199283   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:15.199336   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:15.231927   58817 cri.go:89] found id: ""
	I0719 15:50:15.231957   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.231966   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:15.231978   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:15.231994   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:15.284551   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:15.284586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:15.299152   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:15.299181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:15.374085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:15.374107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:15.374123   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:15.458103   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:15.458144   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:18.003862   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:18.019166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:18.019215   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:18.053430   58817 cri.go:89] found id: ""
	I0719 15:50:18.053470   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.053482   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:18.053492   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:18.053565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:18.091897   58817 cri.go:89] found id: ""
	I0719 15:50:18.091922   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.091931   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:18.091936   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:18.091997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:18.127239   58817 cri.go:89] found id: ""
	I0719 15:50:18.127266   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.127277   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:18.127287   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:18.127346   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:18.163927   58817 cri.go:89] found id: ""
	I0719 15:50:18.163953   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.163965   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:18.163973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:18.164032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:18.199985   58817 cri.go:89] found id: ""
	I0719 15:50:18.200015   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.200027   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:18.200034   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:18.200096   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:18.234576   58817 cri.go:89] found id: ""
	I0719 15:50:18.234603   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.234614   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:18.234625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:18.234686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.815701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:17.816052   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.012834   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.014504   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.582156   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.583011   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.270493   58817 cri.go:89] found id: ""
	I0719 15:50:18.270516   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.270526   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:18.270532   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:18.270588   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:18.306779   58817 cri.go:89] found id: ""
	I0719 15:50:18.306813   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.306821   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:18.306832   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:18.306850   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:18.375782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:18.375814   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:18.390595   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:18.390630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:18.459204   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:18.459227   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:18.459243   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:18.540667   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:18.540724   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.084736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:21.099416   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:21.099495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:21.133193   58817 cri.go:89] found id: ""
	I0719 15:50:21.133216   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.133224   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:21.133231   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:21.133309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:21.174649   58817 cri.go:89] found id: ""
	I0719 15:50:21.174679   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.174689   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:21.174697   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:21.174757   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:21.208279   58817 cri.go:89] found id: ""
	I0719 15:50:21.208309   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.208319   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:21.208325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:21.208386   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:21.242199   58817 cri.go:89] found id: ""
	I0719 15:50:21.242222   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.242229   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:21.242247   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:21.242301   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:21.278018   58817 cri.go:89] found id: ""
	I0719 15:50:21.278050   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.278059   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:21.278069   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:21.278125   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:21.314397   58817 cri.go:89] found id: ""
	I0719 15:50:21.314419   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.314427   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:21.314435   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:21.314490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:21.349041   58817 cri.go:89] found id: ""
	I0719 15:50:21.349067   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.349075   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:21.349080   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:21.349129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:21.387325   58817 cri.go:89] found id: ""
	I0719 15:50:21.387353   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.387361   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:21.387369   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:21.387384   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:21.401150   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:21.401177   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:21.465784   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:21.465810   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:21.465821   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:21.545965   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:21.545998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.584054   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:21.584081   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:20.312912   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:22.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:20.513572   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.014103   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:21.082689   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.583483   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:24.139199   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:24.152485   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:24.152552   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:24.186387   58817 cri.go:89] found id: ""
	I0719 15:50:24.186417   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.186427   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:24.186435   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:24.186494   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:24.226061   58817 cri.go:89] found id: ""
	I0719 15:50:24.226093   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.226103   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:24.226111   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:24.226168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:24.265542   58817 cri.go:89] found id: ""
	I0719 15:50:24.265566   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.265574   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:24.265579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:24.265630   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:24.300277   58817 cri.go:89] found id: ""
	I0719 15:50:24.300308   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.300318   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:24.300325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:24.300378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:24.340163   58817 cri.go:89] found id: ""
	I0719 15:50:24.340192   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.340203   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:24.340211   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:24.340270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:24.375841   58817 cri.go:89] found id: ""
	I0719 15:50:24.375863   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.375873   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:24.375881   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:24.375941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:24.413528   58817 cri.go:89] found id: ""
	I0719 15:50:24.413558   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.413569   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:24.413577   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:24.413641   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:24.451101   58817 cri.go:89] found id: ""
	I0719 15:50:24.451129   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.451139   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:24.451148   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:24.451163   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:24.491150   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:24.491178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.544403   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:24.544436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:24.560376   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:24.560407   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:24.633061   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:24.633081   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:24.633097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.214261   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:27.227642   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:27.227724   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:27.263805   58817 cri.go:89] found id: ""
	I0719 15:50:27.263838   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.263851   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:27.263859   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:27.263941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:27.299817   58817 cri.go:89] found id: ""
	I0719 15:50:27.299860   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.299872   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:27.299879   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:27.299947   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:27.339924   58817 cri.go:89] found id: ""
	I0719 15:50:27.339953   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.339963   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:27.339971   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:27.340036   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:27.375850   58817 cri.go:89] found id: ""
	I0719 15:50:27.375877   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.375885   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:27.375891   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:27.375940   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:27.410395   58817 cri.go:89] found id: ""
	I0719 15:50:27.410420   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.410429   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:27.410437   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:27.410498   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:27.444124   58817 cri.go:89] found id: ""
	I0719 15:50:27.444154   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.444162   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:27.444167   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:27.444230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:27.478162   58817 cri.go:89] found id: ""
	I0719 15:50:27.478191   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.478202   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:27.478210   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:27.478285   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:27.514901   58817 cri.go:89] found id: ""
	I0719 15:50:27.514939   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.514949   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:27.514959   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:27.514973   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.591783   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:27.591815   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:27.629389   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:27.629431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:27.684318   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:27.684351   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:27.698415   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:27.698441   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:27.770032   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:25.312127   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.312599   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.512955   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.515102   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.583597   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:28.083843   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.270332   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:30.284645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:30.284716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:30.324096   58817 cri.go:89] found id: ""
	I0719 15:50:30.324120   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.324128   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:30.324133   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:30.324181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:30.362682   58817 cri.go:89] found id: ""
	I0719 15:50:30.362749   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.362769   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:30.362777   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:30.362848   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:30.400797   58817 cri.go:89] found id: ""
	I0719 15:50:30.400829   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.400840   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:30.400847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:30.400910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:30.438441   58817 cri.go:89] found id: ""
	I0719 15:50:30.438471   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.438482   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:30.438490   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:30.438556   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:30.481525   58817 cri.go:89] found id: ""
	I0719 15:50:30.481555   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.481567   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:30.481581   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:30.481643   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:30.527384   58817 cri.go:89] found id: ""
	I0719 15:50:30.527416   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.527426   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:30.527434   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:30.527495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:30.591502   58817 cri.go:89] found id: ""
	I0719 15:50:30.591530   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.591540   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:30.591548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:30.591603   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:30.627271   58817 cri.go:89] found id: ""
	I0719 15:50:30.627298   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.627306   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:30.627315   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:30.627326   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:30.680411   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:30.680463   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:30.694309   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:30.694344   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:30.771740   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.771776   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:30.771794   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:30.857591   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:30.857625   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:29.815683   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.312009   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.312309   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.013332   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.013381   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.082937   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.407376   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:33.421602   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:33.421680   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:33.458608   58817 cri.go:89] found id: ""
	I0719 15:50:33.458640   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.458650   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:33.458658   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:33.458720   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:33.494250   58817 cri.go:89] found id: ""
	I0719 15:50:33.494279   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.494290   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:33.494298   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:33.494363   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:33.534768   58817 cri.go:89] found id: ""
	I0719 15:50:33.534793   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.534804   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:33.534811   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:33.534876   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:33.569912   58817 cri.go:89] found id: ""
	I0719 15:50:33.569942   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.569950   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:33.569955   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:33.570010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:33.605462   58817 cri.go:89] found id: ""
	I0719 15:50:33.605486   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.605496   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:33.605503   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:33.605569   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:33.649091   58817 cri.go:89] found id: ""
	I0719 15:50:33.649121   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.649129   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:33.649134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:33.649184   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:33.682056   58817 cri.go:89] found id: ""
	I0719 15:50:33.682084   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.682092   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:33.682097   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:33.682145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:33.717454   58817 cri.go:89] found id: ""
	I0719 15:50:33.717483   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.717492   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:33.717501   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:33.717513   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:33.770793   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:33.770828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:33.784549   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:33.784583   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:33.860831   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:33.860851   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:33.860862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:33.936003   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:33.936037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.476206   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:36.489032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:36.489090   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:36.525070   58817 cri.go:89] found id: ""
	I0719 15:50:36.525098   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.525108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:36.525116   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:36.525171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:36.560278   58817 cri.go:89] found id: ""
	I0719 15:50:36.560301   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.560309   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:36.560315   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:36.560367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:36.595594   58817 cri.go:89] found id: ""
	I0719 15:50:36.595620   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.595630   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:36.595637   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:36.595696   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:36.631403   58817 cri.go:89] found id: ""
	I0719 15:50:36.631434   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.631442   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:36.631447   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:36.631502   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:36.671387   58817 cri.go:89] found id: ""
	I0719 15:50:36.671413   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.671424   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:36.671431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:36.671492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:36.705473   58817 cri.go:89] found id: ""
	I0719 15:50:36.705500   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.705507   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:36.705514   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:36.705559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:36.741077   58817 cri.go:89] found id: ""
	I0719 15:50:36.741110   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.741126   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:36.741133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:36.741195   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:36.781987   58817 cri.go:89] found id: ""
	I0719 15:50:36.782016   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.782025   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:36.782036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:36.782051   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:36.795107   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:36.795138   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:36.869034   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:36.869056   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:36.869070   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:36.946172   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:36.946207   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.983497   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:36.983535   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:36.812745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.312184   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.513321   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:36.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.012035   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:35.084310   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:37.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.537658   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:39.551682   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:39.551756   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:39.588176   58817 cri.go:89] found id: ""
	I0719 15:50:39.588199   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.588206   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:39.588212   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:39.588255   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:39.623202   58817 cri.go:89] found id: ""
	I0719 15:50:39.623235   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.623245   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:39.623265   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:39.623317   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:39.658601   58817 cri.go:89] found id: ""
	I0719 15:50:39.658634   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.658646   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:39.658653   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:39.658712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:39.694820   58817 cri.go:89] found id: ""
	I0719 15:50:39.694842   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.694852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:39.694859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:39.694922   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:39.734296   58817 cri.go:89] found id: ""
	I0719 15:50:39.734325   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.734333   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:39.734339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:39.734393   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:39.773416   58817 cri.go:89] found id: ""
	I0719 15:50:39.773506   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.773527   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:39.773538   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:39.773614   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:39.812265   58817 cri.go:89] found id: ""
	I0719 15:50:39.812293   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.812303   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:39.812311   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:39.812366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:39.849148   58817 cri.go:89] found id: ""
	I0719 15:50:39.849177   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.849188   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:39.849199   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:39.849213   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.900254   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:39.900285   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:39.913997   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:39.914025   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:39.986937   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:39.986963   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:39.986982   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:40.071967   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:40.072009   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:42.612170   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:42.625741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:42.625824   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:42.662199   58817 cri.go:89] found id: ""
	I0719 15:50:42.662230   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.662253   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:42.662261   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:42.662314   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:42.702346   58817 cri.go:89] found id: ""
	I0719 15:50:42.702374   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.702387   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:42.702394   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:42.702454   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:42.743446   58817 cri.go:89] found id: ""
	I0719 15:50:42.743475   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.743488   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:42.743495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:42.743555   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:42.783820   58817 cri.go:89] found id: ""
	I0719 15:50:42.783844   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.783852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:42.783858   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:42.783917   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:42.821375   58817 cri.go:89] found id: ""
	I0719 15:50:42.821403   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.821414   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:42.821421   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:42.821484   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:42.856010   58817 cri.go:89] found id: ""
	I0719 15:50:42.856037   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.856045   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:42.856051   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:42.856097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:42.895867   58817 cri.go:89] found id: ""
	I0719 15:50:42.895894   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.895902   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:42.895908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:42.895955   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:42.933077   58817 cri.go:89] found id: ""
	I0719 15:50:42.933106   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.933114   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:42.933123   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:42.933135   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:42.984103   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:42.984142   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:42.998043   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:42.998075   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:43.069188   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:43.069210   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:43.069222   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:43.148933   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:43.148991   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:41.313263   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.816257   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:41.014458   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.017012   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:40.083591   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:42.582246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:44.582857   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.687007   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:45.701019   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:45.701099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:45.737934   58817 cri.go:89] found id: ""
	I0719 15:50:45.737960   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.737970   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:45.737978   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:45.738037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:45.774401   58817 cri.go:89] found id: ""
	I0719 15:50:45.774428   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.774438   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:45.774447   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:45.774503   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:45.814507   58817 cri.go:89] found id: ""
	I0719 15:50:45.814533   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.814544   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:45.814551   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:45.814610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:45.855827   58817 cri.go:89] found id: ""
	I0719 15:50:45.855852   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.855870   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:45.855877   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:45.855928   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:45.898168   58817 cri.go:89] found id: ""
	I0719 15:50:45.898196   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.898204   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:45.898209   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:45.898281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:45.933402   58817 cri.go:89] found id: ""
	I0719 15:50:45.933433   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.933449   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:45.933468   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:45.933525   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:45.971415   58817 cri.go:89] found id: ""
	I0719 15:50:45.971443   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.971451   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:45.971457   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:45.971508   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:46.006700   58817 cri.go:89] found id: ""
	I0719 15:50:46.006729   58817 logs.go:276] 0 containers: []
	W0719 15:50:46.006739   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:46.006750   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:46.006764   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:46.083885   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:46.083925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:46.122277   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:46.122308   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:46.172907   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:46.172940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:46.186365   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:46.186392   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:46.263803   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:46.312320   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.312805   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.512849   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.013822   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:46.582906   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.583537   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.764336   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:48.778927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:48.779002   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:48.816538   58817 cri.go:89] found id: ""
	I0719 15:50:48.816566   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.816576   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:48.816589   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:48.816657   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:48.852881   58817 cri.go:89] found id: ""
	I0719 15:50:48.852904   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.852912   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:48.852925   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:48.852987   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:48.886156   58817 cri.go:89] found id: ""
	I0719 15:50:48.886187   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.886196   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:48.886202   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:48.886271   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:48.922221   58817 cri.go:89] found id: ""
	I0719 15:50:48.922270   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.922281   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:48.922289   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:48.922350   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:48.957707   58817 cri.go:89] found id: ""
	I0719 15:50:48.957735   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.957743   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:48.957750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:48.957797   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:48.994635   58817 cri.go:89] found id: ""
	I0719 15:50:48.994667   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.994679   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:48.994687   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:48.994747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:49.028849   58817 cri.go:89] found id: ""
	I0719 15:50:49.028873   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.028881   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:49.028886   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:49.028933   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:49.063835   58817 cri.go:89] found id: ""
	I0719 15:50:49.063865   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.063875   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:49.063885   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:49.063900   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:49.144709   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:49.144751   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:49.184783   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:49.184819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:49.237005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:49.237037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:49.250568   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:49.250595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:49.319473   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:51.820132   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:51.833230   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:51.833298   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:51.870393   58817 cri.go:89] found id: ""
	I0719 15:50:51.870424   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.870435   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:51.870442   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:51.870496   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:51.906094   58817 cri.go:89] found id: ""
	I0719 15:50:51.906119   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.906132   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:51.906139   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:51.906192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:51.941212   58817 cri.go:89] found id: ""
	I0719 15:50:51.941236   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.941244   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:51.941257   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:51.941300   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:51.973902   58817 cri.go:89] found id: ""
	I0719 15:50:51.973925   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.973933   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:51.973938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:51.973983   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:52.010449   58817 cri.go:89] found id: ""
	I0719 15:50:52.010476   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.010486   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:52.010493   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:52.010551   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:52.047317   58817 cri.go:89] found id: ""
	I0719 15:50:52.047343   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.047353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:52.047360   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:52.047405   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:52.081828   58817 cri.go:89] found id: ""
	I0719 15:50:52.081859   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.081868   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:52.081875   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:52.081946   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:52.119128   58817 cri.go:89] found id: ""
	I0719 15:50:52.119156   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.119164   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:52.119172   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:52.119185   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:52.132928   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:52.132955   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:52.203075   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:52.203099   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:52.203114   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:52.278743   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:52.278781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:52.325456   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:52.325492   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:50.815488   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.312626   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:50.013996   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:52.514493   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:51.082358   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.582566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:54.879243   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:54.894078   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:54.894147   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:54.931463   58817 cri.go:89] found id: ""
	I0719 15:50:54.931496   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.931507   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:54.931514   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:54.931585   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:54.968803   58817 cri.go:89] found id: ""
	I0719 15:50:54.968831   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.968840   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:54.968847   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:54.968911   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:55.005621   58817 cri.go:89] found id: ""
	I0719 15:50:55.005646   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.005657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:55.005664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:55.005733   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:55.040271   58817 cri.go:89] found id: ""
	I0719 15:50:55.040292   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.040299   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:55.040305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:55.040349   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:55.072693   58817 cri.go:89] found id: ""
	I0719 15:50:55.072714   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.072722   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:55.072728   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:55.072779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:55.111346   58817 cri.go:89] found id: ""
	I0719 15:50:55.111373   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.111381   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:55.111386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:55.111430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:55.149358   58817 cri.go:89] found id: ""
	I0719 15:50:55.149385   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.149395   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:55.149402   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:55.149459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:55.183807   58817 cri.go:89] found id: ""
	I0719 15:50:55.183834   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.183845   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:55.183856   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:55.183870   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:55.234128   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:55.234157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:55.247947   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:55.247971   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:55.317405   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:55.317425   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:55.317436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:55.398613   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:55.398649   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:57.945601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:57.960139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:57.960193   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:58.000436   58817 cri.go:89] found id: ""
	I0719 15:50:58.000462   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.000469   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:58.000476   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:58.000522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:58.041437   58817 cri.go:89] found id: ""
	I0719 15:50:58.041463   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.041472   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:58.041477   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:58.041539   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:58.077280   58817 cri.go:89] found id: ""
	I0719 15:50:58.077303   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.077311   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:58.077317   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:58.077373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:58.111992   58817 cri.go:89] found id: ""
	I0719 15:50:58.112019   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.112026   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:58.112032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:58.112107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:58.146582   58817 cri.go:89] found id: ""
	I0719 15:50:58.146610   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.146620   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:58.146625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:58.146669   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:58.182159   58817 cri.go:89] found id: ""
	I0719 15:50:58.182187   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.182196   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:58.182204   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:58.182279   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:58.215804   58817 cri.go:89] found id: ""
	I0719 15:50:58.215834   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.215844   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:58.215852   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:58.215913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:58.249366   58817 cri.go:89] found id: ""
	I0719 15:50:58.249392   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.249402   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:58.249413   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:58.249430   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:50:55.814460   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.313739   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:55.014039   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:57.513248   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:56.082876   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.583172   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	W0719 15:50:58.324510   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:58.324536   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:58.324550   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:58.406320   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:58.406353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:58.449820   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:58.449854   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:58.502245   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:58.502281   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:01.018374   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:01.032683   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:01.032753   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:01.071867   58817 cri.go:89] found id: ""
	I0719 15:51:01.071898   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.071910   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:01.071917   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:01.071982   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:01.108227   58817 cri.go:89] found id: ""
	I0719 15:51:01.108251   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.108259   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:01.108264   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:01.108309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:01.143029   58817 cri.go:89] found id: ""
	I0719 15:51:01.143064   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.143076   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:01.143083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:01.143154   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:01.178871   58817 cri.go:89] found id: ""
	I0719 15:51:01.178901   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.178911   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:01.178919   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:01.178974   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:01.216476   58817 cri.go:89] found id: ""
	I0719 15:51:01.216507   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.216518   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:01.216526   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:01.216584   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:01.254534   58817 cri.go:89] found id: ""
	I0719 15:51:01.254557   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.254565   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:01.254572   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:01.254617   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:01.293156   58817 cri.go:89] found id: ""
	I0719 15:51:01.293187   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.293198   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:01.293212   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:01.293278   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:01.328509   58817 cri.go:89] found id: ""
	I0719 15:51:01.328538   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.328549   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:01.328560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:01.328574   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:01.399659   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:01.399678   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:01.399693   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:01.476954   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:01.476993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:01.519513   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:01.519539   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:01.571976   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:01.572015   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:00.812445   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.813629   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.011751   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.013062   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.013473   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.584028   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:03.082149   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.088726   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:04.102579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:04.102642   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:04.141850   58817 cri.go:89] found id: ""
	I0719 15:51:04.141888   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.141899   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:04.141907   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:04.141988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:04.177821   58817 cri.go:89] found id: ""
	I0719 15:51:04.177846   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.177854   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:04.177859   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:04.177914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:04.212905   58817 cri.go:89] found id: ""
	I0719 15:51:04.212935   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.212945   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:04.212951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:04.213012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:04.249724   58817 cri.go:89] found id: ""
	I0719 15:51:04.249762   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.249773   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:04.249781   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:04.249843   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:04.285373   58817 cri.go:89] found id: ""
	I0719 15:51:04.285407   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.285418   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:04.285430   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:04.285490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:04.348842   58817 cri.go:89] found id: ""
	I0719 15:51:04.348878   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.348888   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:04.348895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:04.348963   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:04.384420   58817 cri.go:89] found id: ""
	I0719 15:51:04.384448   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.384459   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:04.384466   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:04.384533   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:04.420716   58817 cri.go:89] found id: ""
	I0719 15:51:04.420746   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.420754   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:04.420763   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:04.420775   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:04.472986   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:04.473027   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.488911   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:04.488938   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:04.563103   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:04.563125   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:04.563139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:04.640110   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:04.640151   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.183190   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:07.196605   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:07.196667   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:07.234974   58817 cri.go:89] found id: ""
	I0719 15:51:07.235002   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.235010   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:07.235016   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:07.235066   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:07.269045   58817 cri.go:89] found id: ""
	I0719 15:51:07.269078   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.269089   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:07.269096   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:07.269156   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:07.308866   58817 cri.go:89] found id: ""
	I0719 15:51:07.308897   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.308907   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:07.308914   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:07.308973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:07.344406   58817 cri.go:89] found id: ""
	I0719 15:51:07.344440   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.344451   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:07.344459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:07.344517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:07.379914   58817 cri.go:89] found id: ""
	I0719 15:51:07.379948   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.379956   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:07.379962   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:07.380010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:07.420884   58817 cri.go:89] found id: ""
	I0719 15:51:07.420923   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.420934   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:07.420942   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:07.421012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:07.455012   58817 cri.go:89] found id: ""
	I0719 15:51:07.455041   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.455071   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:07.455082   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:07.455151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:07.492321   58817 cri.go:89] found id: ""
	I0719 15:51:07.492346   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.492354   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:07.492362   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:07.492374   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:07.506377   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:07.506408   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:07.578895   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:07.578928   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:07.578943   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:07.662333   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:07.662373   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.701823   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:07.701856   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:05.312865   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.816945   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:06.513634   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.012283   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:05.084185   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.583429   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.583944   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:10.256610   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:10.270156   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:10.270225   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:10.311318   58817 cri.go:89] found id: ""
	I0719 15:51:10.311347   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.311357   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:10.311365   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:10.311422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:10.347145   58817 cri.go:89] found id: ""
	I0719 15:51:10.347174   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.347183   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:10.347189   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:10.347243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:10.381626   58817 cri.go:89] found id: ""
	I0719 15:51:10.381659   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.381672   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:10.381680   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:10.381750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:10.417077   58817 cri.go:89] found id: ""
	I0719 15:51:10.417103   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.417111   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:10.417117   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:10.417174   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:10.454094   58817 cri.go:89] found id: ""
	I0719 15:51:10.454123   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.454131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:10.454137   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:10.454185   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:10.489713   58817 cri.go:89] found id: ""
	I0719 15:51:10.489739   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.489747   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:10.489753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:10.489799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:10.524700   58817 cri.go:89] found id: ""
	I0719 15:51:10.524737   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.524745   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:10.524753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:10.524810   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:10.564249   58817 cri.go:89] found id: ""
	I0719 15:51:10.564277   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.564285   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:10.564293   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:10.564309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.618563   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:10.618599   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:10.633032   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:10.633058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:10.706504   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:10.706530   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:10.706546   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:10.800542   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:10.800581   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:10.315941   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:12.812732   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.013749   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.513338   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.584335   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:14.083745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.357761   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:13.371415   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:13.371492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:13.406666   58817 cri.go:89] found id: ""
	I0719 15:51:13.406695   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.406705   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:13.406713   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:13.406773   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:13.448125   58817 cri.go:89] found id: ""
	I0719 15:51:13.448153   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.448164   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:13.448171   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:13.448233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:13.483281   58817 cri.go:89] found id: ""
	I0719 15:51:13.483306   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.483315   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:13.483323   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:13.483384   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:13.522499   58817 cri.go:89] found id: ""
	I0719 15:51:13.522527   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.522538   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:13.522545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:13.522605   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:13.560011   58817 cri.go:89] found id: ""
	I0719 15:51:13.560038   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.560049   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:13.560056   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:13.560115   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:13.596777   58817 cri.go:89] found id: ""
	I0719 15:51:13.596812   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.596824   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:13.596832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:13.596883   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:13.633765   58817 cri.go:89] found id: ""
	I0719 15:51:13.633790   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.633798   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:13.633804   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:13.633857   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:13.670129   58817 cri.go:89] found id: ""
	I0719 15:51:13.670151   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.670160   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:13.670168   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:13.670179   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:13.745337   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:13.745363   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:13.745375   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:13.827800   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:13.827831   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.871659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:13.871695   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:13.925445   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:13.925478   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.439455   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:16.454414   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:16.454485   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:16.494962   58817 cri.go:89] found id: ""
	I0719 15:51:16.494987   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.494997   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:16.495004   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:16.495048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:16.540948   58817 cri.go:89] found id: ""
	I0719 15:51:16.540978   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.540986   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:16.540992   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:16.541052   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:16.588886   58817 cri.go:89] found id: ""
	I0719 15:51:16.588916   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.588926   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:16.588933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:16.588990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:16.649174   58817 cri.go:89] found id: ""
	I0719 15:51:16.649198   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.649207   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:16.649214   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:16.649260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:16.688759   58817 cri.go:89] found id: ""
	I0719 15:51:16.688787   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.688794   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:16.688800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:16.688860   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:16.724730   58817 cri.go:89] found id: ""
	I0719 15:51:16.724759   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.724767   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:16.724773   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:16.724831   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:16.762972   58817 cri.go:89] found id: ""
	I0719 15:51:16.762995   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.763002   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:16.763007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:16.763058   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:16.798054   58817 cri.go:89] found id: ""
	I0719 15:51:16.798080   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.798088   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:16.798096   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:16.798107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:16.887495   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:16.887533   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:16.929384   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:16.929412   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:16.978331   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:16.978362   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.991663   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:16.991687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:17.064706   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:15.311404   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:17.312317   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.013193   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:18.014317   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.583403   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.082807   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.565881   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:19.579476   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:19.579536   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:19.614551   58817 cri.go:89] found id: ""
	I0719 15:51:19.614576   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.614586   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:19.614595   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:19.614655   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:19.657984   58817 cri.go:89] found id: ""
	I0719 15:51:19.658012   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.658023   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:19.658030   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:19.658098   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:19.692759   58817 cri.go:89] found id: ""
	I0719 15:51:19.692785   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.692793   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:19.692800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:19.692855   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:19.726119   58817 cri.go:89] found id: ""
	I0719 15:51:19.726148   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.726158   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:19.726174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:19.726230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:19.763348   58817 cri.go:89] found id: ""
	I0719 15:51:19.763372   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.763379   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:19.763385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:19.763439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:19.796880   58817 cri.go:89] found id: ""
	I0719 15:51:19.796909   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.796923   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:19.796929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:19.796977   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:19.831819   58817 cri.go:89] found id: ""
	I0719 15:51:19.831845   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.831853   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:19.831859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:19.831913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:19.866787   58817 cri.go:89] found id: ""
	I0719 15:51:19.866814   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.866825   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:19.866835   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:19.866848   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:19.914087   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:19.914120   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.927236   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:19.927260   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:19.995619   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:19.995643   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:19.995658   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:20.084355   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:20.084385   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:22.623263   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:22.637745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:22.637818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:22.678276   58817 cri.go:89] found id: ""
	I0719 15:51:22.678305   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.678317   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:22.678325   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:22.678378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:22.716710   58817 cri.go:89] found id: ""
	I0719 15:51:22.716736   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.716753   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:22.716761   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:22.716828   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:22.754965   58817 cri.go:89] found id: ""
	I0719 15:51:22.754993   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.755002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:22.755008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:22.755054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:22.788474   58817 cri.go:89] found id: ""
	I0719 15:51:22.788508   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.788519   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:22.788527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:22.788586   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:22.823838   58817 cri.go:89] found id: ""
	I0719 15:51:22.823872   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.823882   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:22.823889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:22.823950   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:22.863086   58817 cri.go:89] found id: ""
	I0719 15:51:22.863127   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.863138   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:22.863146   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:22.863211   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:22.899292   58817 cri.go:89] found id: ""
	I0719 15:51:22.899321   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.899331   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:22.899339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:22.899403   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:22.932292   58817 cri.go:89] found id: ""
	I0719 15:51:22.932318   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.932328   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:22.932338   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:22.932353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:23.003438   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:23.003460   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:23.003477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:23.088349   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:23.088391   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:23.132169   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:23.132194   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:23.184036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:23.184069   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.812659   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.813178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.311781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:20.512610   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:22.512707   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.083030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:23.583501   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.698493   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:25.712199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:25.712267   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:25.750330   58817 cri.go:89] found id: ""
	I0719 15:51:25.750358   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.750368   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:25.750375   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:25.750434   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:25.784747   58817 cri.go:89] found id: ""
	I0719 15:51:25.784777   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.784788   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:25.784794   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:25.784853   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:25.821272   58817 cri.go:89] found id: ""
	I0719 15:51:25.821297   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.821308   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:25.821315   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:25.821370   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:25.858697   58817 cri.go:89] found id: ""
	I0719 15:51:25.858723   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.858732   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:25.858737   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:25.858782   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:25.901706   58817 cri.go:89] found id: ""
	I0719 15:51:25.901738   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.901749   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:25.901757   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:25.901818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:25.943073   58817 cri.go:89] found id: ""
	I0719 15:51:25.943103   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.943115   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:25.943122   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:25.943190   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:25.982707   58817 cri.go:89] found id: ""
	I0719 15:51:25.982731   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.982739   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:25.982745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:25.982791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:26.023419   58817 cri.go:89] found id: ""
	I0719 15:51:26.023442   58817 logs.go:276] 0 containers: []
	W0719 15:51:26.023449   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:26.023456   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:26.023468   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:26.103842   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:26.103875   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:26.143567   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:26.143594   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:26.199821   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:26.199862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:26.214829   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:26.214865   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:26.287368   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:26.312416   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.313406   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.513171   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:27.012377   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:29.014890   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.583785   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.083633   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.788202   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:28.801609   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:28.801676   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:28.834911   58817 cri.go:89] found id: ""
	I0719 15:51:28.834937   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.834947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:28.834955   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:28.835013   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:28.868219   58817 cri.go:89] found id: ""
	I0719 15:51:28.868242   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.868250   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:28.868256   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:28.868315   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:28.904034   58817 cri.go:89] found id: ""
	I0719 15:51:28.904055   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.904063   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:28.904068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:28.904121   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:28.941019   58817 cri.go:89] found id: ""
	I0719 15:51:28.941051   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.941061   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:28.941068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:28.941129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:28.976309   58817 cri.go:89] found id: ""
	I0719 15:51:28.976335   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.976346   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:28.976352   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:28.976410   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:29.011340   58817 cri.go:89] found id: ""
	I0719 15:51:29.011368   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.011378   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:29.011388   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:29.011447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:29.044356   58817 cri.go:89] found id: ""
	I0719 15:51:29.044378   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.044385   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:29.044390   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:29.044438   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:29.080883   58817 cri.go:89] found id: ""
	I0719 15:51:29.080910   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.080919   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:29.080929   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:29.080941   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:29.160266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:29.160303   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:29.198221   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:29.198267   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:29.249058   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:29.249088   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:29.262711   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:29.262740   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:29.335654   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:31.836354   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:31.851895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:31.851957   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:31.887001   58817 cri.go:89] found id: ""
	I0719 15:51:31.887036   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.887052   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:31.887058   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:31.887107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:31.922102   58817 cri.go:89] found id: ""
	I0719 15:51:31.922132   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.922140   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:31.922145   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:31.922196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:31.960183   58817 cri.go:89] found id: ""
	I0719 15:51:31.960208   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.960215   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:31.960221   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:31.960263   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:31.994822   58817 cri.go:89] found id: ""
	I0719 15:51:31.994849   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.994859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:31.994865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:31.994912   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:32.034110   58817 cri.go:89] found id: ""
	I0719 15:51:32.034136   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.034145   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:32.034151   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:32.034209   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:32.071808   58817 cri.go:89] found id: ""
	I0719 15:51:32.071834   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.071842   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:32.071847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:32.071910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:32.110784   58817 cri.go:89] found id: ""
	I0719 15:51:32.110810   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.110820   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:32.110828   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:32.110895   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:32.148052   58817 cri.go:89] found id: ""
	I0719 15:51:32.148086   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.148097   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:32.148108   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:32.148124   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:32.198891   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:32.198926   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:32.212225   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:32.212251   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:32.288389   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:32.288412   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:32.288431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:32.368196   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:32.368229   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:30.811822   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.813013   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:31.512155   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.012636   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:30.083916   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.582845   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.582945   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.911872   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:34.926689   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:34.926771   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:34.959953   58817 cri.go:89] found id: ""
	I0719 15:51:34.959982   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.959992   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:34.960000   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:34.960061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:34.999177   58817 cri.go:89] found id: ""
	I0719 15:51:34.999206   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.999216   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:34.999223   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:34.999283   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:35.036001   58817 cri.go:89] found id: ""
	I0719 15:51:35.036034   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.036045   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:35.036052   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:35.036099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:35.070375   58817 cri.go:89] found id: ""
	I0719 15:51:35.070404   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.070415   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:35.070423   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:35.070483   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:35.106940   58817 cri.go:89] found id: ""
	I0719 15:51:35.106969   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.106979   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:35.106984   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:35.107031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:35.151664   58817 cri.go:89] found id: ""
	I0719 15:51:35.151688   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.151695   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:35.151700   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:35.151748   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:35.187536   58817 cri.go:89] found id: ""
	I0719 15:51:35.187564   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.187578   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:35.187588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:35.187662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.222614   58817 cri.go:89] found id: ""
	I0719 15:51:35.222642   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.222652   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:35.222662   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:35.222677   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:35.273782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:35.273816   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:35.288147   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:35.288176   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:35.361085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:35.361107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:35.361118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:35.443327   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:35.443358   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:37.994508   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:38.007709   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:38.007779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:38.040910   58817 cri.go:89] found id: ""
	I0719 15:51:38.040940   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.040947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:38.040954   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:38.040999   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:38.080009   58817 cri.go:89] found id: ""
	I0719 15:51:38.080039   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.080058   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:38.080066   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:38.080137   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:38.115997   58817 cri.go:89] found id: ""
	I0719 15:51:38.116018   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.116026   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:38.116031   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:38.116079   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:38.150951   58817 cri.go:89] found id: ""
	I0719 15:51:38.150973   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.150981   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:38.150987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:38.151045   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:38.184903   58817 cri.go:89] found id: ""
	I0719 15:51:38.184938   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.184949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:38.184956   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:38.185014   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:38.218099   58817 cri.go:89] found id: ""
	I0719 15:51:38.218123   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.218131   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:38.218138   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:38.218192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:38.252965   58817 cri.go:89] found id: ""
	I0719 15:51:38.252990   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.252997   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:38.253003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:38.253047   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.313638   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:37.813400   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.013415   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.513387   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.583140   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:39.084770   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.289710   58817 cri.go:89] found id: ""
	I0719 15:51:38.289739   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.289749   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:38.289757   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:38.289770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:38.340686   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:38.340715   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:38.354334   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:38.354357   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:38.424410   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:38.424438   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:38.424452   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:38.500744   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:38.500781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.043436   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:41.056857   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:41.056914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:41.093651   58817 cri.go:89] found id: ""
	I0719 15:51:41.093678   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.093688   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:41.093695   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:41.093749   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:41.129544   58817 cri.go:89] found id: ""
	I0719 15:51:41.129572   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.129580   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:41.129586   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:41.129646   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:41.163416   58817 cri.go:89] found id: ""
	I0719 15:51:41.163444   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.163457   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:41.163465   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:41.163520   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:41.199180   58817 cri.go:89] found id: ""
	I0719 15:51:41.199205   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.199212   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:41.199220   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:41.199274   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:41.233891   58817 cri.go:89] found id: ""
	I0719 15:51:41.233919   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.233929   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:41.233936   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:41.233990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:41.270749   58817 cri.go:89] found id: ""
	I0719 15:51:41.270777   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.270788   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:41.270794   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:41.270841   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:41.308365   58817 cri.go:89] found id: ""
	I0719 15:51:41.308393   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.308402   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:41.308408   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:41.308462   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:41.344692   58817 cri.go:89] found id: ""
	I0719 15:51:41.344720   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.344729   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:41.344738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:41.344749   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:41.420009   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:41.420035   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:41.420052   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:41.503356   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:41.503397   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.543875   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:41.543905   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:41.595322   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:41.595353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:40.312909   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:42.812703   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.011956   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:43.513117   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.584336   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.082447   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.110343   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:44.125297   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:44.125365   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:44.160356   58817 cri.go:89] found id: ""
	I0719 15:51:44.160387   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.160398   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:44.160405   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:44.160461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:44.195025   58817 cri.go:89] found id: ""
	I0719 15:51:44.195055   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.195065   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:44.195073   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:44.195140   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:44.227871   58817 cri.go:89] found id: ""
	I0719 15:51:44.227907   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.227929   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:44.227937   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:44.228000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:44.265270   58817 cri.go:89] found id: ""
	I0719 15:51:44.265296   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.265305   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:44.265312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:44.265368   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:44.298714   58817 cri.go:89] found id: ""
	I0719 15:51:44.298744   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.298755   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:44.298762   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:44.298826   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:44.332638   58817 cri.go:89] found id: ""
	I0719 15:51:44.332665   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.332673   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:44.332679   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:44.332738   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:44.366871   58817 cri.go:89] found id: ""
	I0719 15:51:44.366897   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.366906   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:44.366913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:44.366980   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:44.409353   58817 cri.go:89] found id: ""
	I0719 15:51:44.409381   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.409392   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:44.409402   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:44.409417   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.446148   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:44.446178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:44.497188   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:44.497217   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.511904   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:44.511935   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:44.577175   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:44.577193   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:44.577208   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.161809   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:47.175425   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:47.175490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:47.213648   58817 cri.go:89] found id: ""
	I0719 15:51:47.213674   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.213681   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:47.213687   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:47.213737   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:47.249941   58817 cri.go:89] found id: ""
	I0719 15:51:47.249967   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.249979   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:47.249986   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:47.250041   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:47.284232   58817 cri.go:89] found id: ""
	I0719 15:51:47.284254   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.284261   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:47.284267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:47.284318   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:47.321733   58817 cri.go:89] found id: ""
	I0719 15:51:47.321767   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.321778   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:47.321786   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:47.321844   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:47.358479   58817 cri.go:89] found id: ""
	I0719 15:51:47.358508   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.358520   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:47.358527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:47.358582   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:47.390070   58817 cri.go:89] found id: ""
	I0719 15:51:47.390098   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.390108   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:47.390116   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:47.390176   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:47.429084   58817 cri.go:89] found id: ""
	I0719 15:51:47.429111   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.429118   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:47.429124   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:47.429179   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:47.469938   58817 cri.go:89] found id: ""
	I0719 15:51:47.469969   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.469979   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:47.469991   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:47.470005   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:47.524080   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:47.524110   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:47.538963   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:47.538993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:47.609107   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:47.609128   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:47.609143   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.691984   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:47.692028   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.813328   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:47.318119   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.013597   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.513037   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.083435   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.582222   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.234104   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:50.248706   58817 kubeadm.go:597] duration metric: took 4m2.874850727s to restartPrimaryControlPlane
	W0719 15:51:50.248802   58817 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:50.248827   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:50.712030   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:51:50.727328   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:51:50.737545   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:51:50.748830   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:51:50.748855   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:51:50.748900   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:51:50.758501   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:51:50.758548   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:51:50.767877   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:51:50.777413   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:51:50.777477   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:51:50.787005   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.795917   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:51:50.795971   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.805058   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:51:50.814014   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:51:50.814069   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
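(The stale-config cleanup just logged follows one pattern per kubeconfig file: if the file does not reference the expected control-plane endpoint, or does not exist, it is removed so that kubeadm init can regenerate it. A minimal sketch of that pattern, using the endpoint from this run:)

    # Remove kubeconfigs that do not point at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done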
	I0719 15:51:50.823876   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:51:50.893204   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:51:50.893281   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:51:51.028479   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:51:51.028607   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:51:51.028698   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:51:51.212205   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:51:51.214199   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:51:51.214313   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:51:51.214423   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:51:51.214546   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:51:51.214625   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:51:51.214728   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:51:51.214813   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:51:51.214918   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:51:51.215011   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:51:51.215121   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:51:51.215231   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:51:51.215296   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:51:51.215381   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:51:51.275010   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:51:51.481366   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:51:51.685208   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:51:51.799007   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:51:51.820431   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:51:51.822171   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:51:51.822257   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:51:51.984066   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:51:51.986034   58817 out.go:204]   - Booting up control plane ...
	I0719 15:51:51.986137   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:51:51.988167   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:51:51.989122   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:51:51.989976   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:51:52.000879   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:51:49.811847   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:51.812747   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.312028   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.514497   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:53.012564   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.585244   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:52.587963   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.576923   58417 pod_ready.go:81] duration metric: took 4m0.000887015s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	E0719 15:51:54.576954   58417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 15:51:54.576979   58417 pod_ready.go:38] duration metric: took 4m10.045017696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:51:54.577013   58417 kubeadm.go:597] duration metric: took 4m18.572474217s to restartPrimaryControlPlane
	W0719 15:51:54.577075   58417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:54.577107   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
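(The 4m0s wait that just expired is a poll of the pod's Ready condition. The test harness polls via client-go, but roughly the same check can be expressed with kubectl; illustrative only, using the pod name from this run:)

    # Poll until the metrics-server pod reports Ready, or give up after 4 minutes
    kubectl --namespace kube-system wait pod metrics-server-78fcd8795b-zwr8g \
      --for=condition=Ready --timeout=4m0s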
	I0719 15:51:56.314112   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:58.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:55.012915   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:57.512491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:01.312620   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:03.812880   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:59.512666   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:02.013784   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.314545   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:08.811891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:04.512583   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:09.016808   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:10.813197   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:13.313167   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:11.513329   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:14.012352   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:15.812105   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:17.812843   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:16.014362   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:18.513873   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:20.685347   58417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.108209289s)
	I0719 15:52:20.685431   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:20.699962   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:20.709728   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:20.719022   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:20.719038   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:52:20.719074   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:52:20.727669   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:52:20.727731   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:52:20.736851   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:52:20.745821   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:52:20.745867   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:52:20.755440   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.764307   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:52:20.764360   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.773759   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:52:20.782354   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:52:20.782420   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:52:20.791186   58417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:20.837700   58417 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 15:52:20.837797   58417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:52:20.958336   58417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:20.958486   58417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:20.958629   58417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:20.967904   58417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:20.969995   58417 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:20.970097   58417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:52:20.970197   58417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:20.970325   58417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:52:20.970438   58417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:52:20.970550   58417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:52:20.970633   58417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:52:20.970740   58417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:52:20.970840   58417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:52:20.970949   58417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:52:20.971049   58417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:52:20.971106   58417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:52:20.971184   58417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:21.175226   58417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:21.355994   58417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 15:52:21.453237   58417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:21.569014   58417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:21.672565   58417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:21.673036   58417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:21.675860   58417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:20.312428   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:22.312770   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:24.314183   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.013099   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:23.512341   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.677594   58417 out.go:204]   - Booting up control plane ...
	I0719 15:52:21.677694   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:21.677787   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:21.677894   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:21.695474   58417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:21.701352   58417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:21.701419   58417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:52:21.831941   58417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 15:52:21.832046   58417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 15:52:22.333073   58417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.399393ms
	I0719 15:52:22.333184   58417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 15:52:27.336964   58417 kubeadm.go:310] [api-check] The API server is healthy after 5.002306078s
	I0719 15:52:27.348152   58417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:27.366916   58417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:27.396214   58417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:27.396475   58417 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-382231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:27.408607   58417 kubeadm.go:310] [bootstrap-token] Using token: xdoy2n.29347ekmgral9ki3
	I0719 15:52:27.409857   58417 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:27.409991   58417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:27.415553   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:27.424772   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:27.428421   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:27.439922   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:27.443985   58417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:27.742805   58417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:28.253742   58417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 15:52:28.744380   58417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 15:52:28.744405   58417 kubeadm.go:310] 
	I0719 15:52:28.744486   58417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:28.744498   58417 kubeadm.go:310] 
	I0719 15:52:28.744581   58417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:28.744588   58417 kubeadm.go:310] 
	I0719 15:52:28.744633   58417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 15:52:28.744704   58417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:28.744783   58417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:28.744794   58417 kubeadm.go:310] 
	I0719 15:52:28.744877   58417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 15:52:28.744891   58417 kubeadm.go:310] 
	I0719 15:52:28.744944   58417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:28.744951   58417 kubeadm.go:310] 
	I0719 15:52:28.744992   58417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 15:52:28.745082   58417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:28.745172   58417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:28.745181   58417 kubeadm.go:310] 
	I0719 15:52:28.745253   58417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:28.745319   58417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 15:52:28.745332   58417 kubeadm.go:310] 
	I0719 15:52:28.745412   58417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745499   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 15:52:28.745518   58417 kubeadm.go:310] 	--control-plane 
	I0719 15:52:28.745525   58417 kubeadm.go:310] 
	I0719 15:52:28.745599   58417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:28.745609   58417 kubeadm.go:310] 
	I0719 15:52:28.745677   58417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745778   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 15:52:28.747435   58417 kubeadm.go:310] W0719 15:52:20.814208    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747697   58417 kubeadm.go:310] W0719 15:52:20.814905    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747795   58417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:52:28.747815   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:52:28.747827   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:52:28.749619   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:26.813409   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.814040   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:25.513048   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:27.514730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.750992   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:28.762976   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:52:28.783894   58417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:28.783972   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.783989   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-382231 minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=no-preload-382231 minikube.k8s.io/primary=true
	I0719 15:52:28.808368   58417 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:29.005658   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.505702   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.005765   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.505834   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.005837   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.506329   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.006419   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.505701   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.005735   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.130121   58417 kubeadm.go:1113] duration metric: took 4.346215264s to wait for elevateKubeSystemPrivileges
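(Put together, the post-init bootstrap visible above installs the bridge CNI config, grants kube-system's default service account cluster-admin, labels the node, and polls until the default service account exists. A condensed, abbreviated sketch of those commands as they appear in the log; the binary path, node name, and label set are specific to this run, and the node-label list is shortened here:)

    # Bridge CNI config for the crio runtime (minikube copies the 496-byte 1-k8s.conflist here)
    sudo mkdir -p /etc/cni/net.d

    KUBECTL=/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl
    KCFG=/var/lib/minikube/kubeconfig

    # RBAC for in-cluster components, plus node metadata labels
    sudo $KUBECTL create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=$KCFG
    sudo $KUBECTL --kubeconfig=$KCFG label --overwrite nodes no-preload-382231 \
      minikube.k8s.io/name=no-preload-382231 minikube.k8s.io/primary=true

    # Wait until the default service account has been created
    until sudo $KUBECTL get sa default --kubeconfig=$KCFG; do sleep 0.5; done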
	I0719 15:52:33.130162   58417 kubeadm.go:394] duration metric: took 4m57.173876302s to StartCluster
	I0719 15:52:33.130187   58417 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.130290   58417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:52:33.131944   58417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.132178   58417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:52:33.132237   58417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:52:33.132339   58417 addons.go:69] Setting storage-provisioner=true in profile "no-preload-382231"
	I0719 15:52:33.132358   58417 addons.go:69] Setting default-storageclass=true in profile "no-preload-382231"
	I0719 15:52:33.132381   58417 addons.go:234] Setting addon storage-provisioner=true in "no-preload-382231"
	I0719 15:52:33.132385   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0719 15:52:33.132391   58417 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:52:33.132392   58417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-382231"
	I0719 15:52:33.132419   58417 addons.go:69] Setting metrics-server=true in profile "no-preload-382231"
	I0719 15:52:33.132423   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132444   58417 addons.go:234] Setting addon metrics-server=true in "no-preload-382231"
	W0719 15:52:33.132452   58417 addons.go:243] addon metrics-server should already be in state true
	I0719 15:52:33.132474   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132740   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132763   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132799   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132810   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132822   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132829   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.134856   58417 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:33.136220   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:33.149028   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0719 15:52:33.149128   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0719 15:52:33.149538   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.149646   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.150093   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150108   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150111   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150119   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150477   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150603   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150955   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.150971   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.151326   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I0719 15:52:33.151359   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.151715   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.152199   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.152223   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.152574   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.153136   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.153170   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.155187   58417 addons.go:234] Setting addon default-storageclass=true in "no-preload-382231"
	W0719 15:52:33.155207   58417 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:52:33.155235   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.155572   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.155602   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.170886   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0719 15:52:33.170884   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0719 15:52:33.171439   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.171510   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0719 15:52:33.171543   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172005   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172026   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172109   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172141   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172162   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172538   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172552   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172609   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172775   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.172831   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172875   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.173021   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.173381   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.173405   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.175118   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.175500   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.177023   58417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:52:33.177041   58417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:52:32.000607   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:52:32.000846   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:32.001125   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:33.178348   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:33.178362   58417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:33.178377   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.178450   58417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.178469   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:52:33.178486   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.182287   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182598   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.182617   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182741   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.182948   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.183074   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.183204   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.183372   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183940   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.183959   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183994   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.184237   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.184356   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.184505   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.191628   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0719 15:52:33.191984   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.192366   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.192385   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.192707   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.192866   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.194285   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.194485   58417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.194499   58417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:33.194514   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.197526   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.197853   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.197872   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.198087   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.198335   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.198472   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.198604   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.382687   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:52:33.403225   58417 node_ready.go:35] waiting up to 6m0s for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430507   58417 node_ready.go:49] node "no-preload-382231" has status "Ready":"True"
	I0719 15:52:33.430535   58417 node_ready.go:38] duration metric: took 27.282654ms for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430546   58417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:33.482352   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.555210   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.565855   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:33.565874   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:52:33.571653   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.609541   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:33.609569   58417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:33.674428   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:33.674455   58417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:33.746703   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:34.092029   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092051   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092341   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092359   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.092369   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092379   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092604   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092628   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.092634   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.093766   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.093785   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094025   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094043   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094076   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.094088   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094325   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094343   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094349   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128393   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.128412   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.128715   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128766   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.128775   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.319737   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.319764   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320141   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320161   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320165   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.320184   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.320199   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320441   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320462   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320475   58417 addons.go:475] Verifying addon metrics-server=true in "no-preload-382231"
	I0719 15:52:34.320482   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.322137   58417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:52:30.812091   59208 pod_ready.go:81] duration metric: took 4m0.006187238s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:30.812113   59208 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:30.812120   59208 pod_ready.go:38] duration metric: took 4m8.614544303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.812135   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:30.812161   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:30.812208   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:30.861054   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:30.861074   59208 cri.go:89] found id: ""
	I0719 15:52:30.861083   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:30.861144   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.865653   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:30.865708   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:30.900435   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:30.900459   59208 cri.go:89] found id: ""
	I0719 15:52:30.900468   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:30.900512   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.904686   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:30.904747   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:30.950618   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.950638   59208 cri.go:89] found id: ""
	I0719 15:52:30.950646   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:30.950691   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.955080   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:30.955147   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:30.996665   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:30.996691   59208 cri.go:89] found id: ""
	I0719 15:52:30.996704   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:30.996778   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.001122   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:31.001191   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:31.042946   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.042969   59208 cri.go:89] found id: ""
	I0719 15:52:31.042979   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:31.043039   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.047311   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:31.047365   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:31.086140   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.086166   59208 cri.go:89] found id: ""
	I0719 15:52:31.086175   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:31.086230   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.091742   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:31.091818   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:31.134209   59208 cri.go:89] found id: ""
	I0719 15:52:31.134241   59208 logs.go:276] 0 containers: []
	W0719 15:52:31.134252   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:31.134260   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:31.134316   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:31.173297   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.173325   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.173331   59208 cri.go:89] found id: ""
	I0719 15:52:31.173353   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:31.173414   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.177951   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.182099   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:31.182121   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:31.196541   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:31.196565   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:31.322528   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:31.322555   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:31.369628   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:31.369658   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:31.417834   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:31.417867   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:31.459116   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:31.459145   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.500986   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:31.501018   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.578557   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:31.578606   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:31.635053   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:31.635082   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:31.692604   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:31.692635   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.729765   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:31.729801   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.766152   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:31.766177   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:32.301240   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:32.301278   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.013083   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:32.013142   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:34.323358   58417 addons.go:510] duration metric: took 1.19112329s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:52:37.001693   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:37.001896   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:34.849019   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:34.866751   59208 api_server.go:72] duration metric: took 4m20.402312557s to wait for apiserver process to appear ...
	I0719 15:52:34.866779   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:34.866816   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:34.866876   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:34.905505   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.905532   59208 cri.go:89] found id: ""
	I0719 15:52:34.905542   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:34.905609   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.910996   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:34.911069   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:34.958076   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:34.958100   59208 cri.go:89] found id: ""
	I0719 15:52:34.958110   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:34.958166   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.962439   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:34.962507   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:34.999095   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:34.999117   59208 cri.go:89] found id: ""
	I0719 15:52:34.999126   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:34.999178   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.003785   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:35.003848   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:35.042585   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.042613   59208 cri.go:89] found id: ""
	I0719 15:52:35.042622   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:35.042683   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.048705   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:35.048770   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:35.092408   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.092435   59208 cri.go:89] found id: ""
	I0719 15:52:35.092444   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:35.092499   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.096983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:35.097050   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:35.135694   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.135717   59208 cri.go:89] found id: ""
	I0719 15:52:35.135726   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:35.135782   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.140145   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:35.140223   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:35.178912   59208 cri.go:89] found id: ""
	I0719 15:52:35.178938   59208 logs.go:276] 0 containers: []
	W0719 15:52:35.178948   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:35.178955   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:35.179015   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:35.229067   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.229090   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.229104   59208 cri.go:89] found id: ""
	I0719 15:52:35.229112   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:35.229172   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.234985   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.240098   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:35.240120   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:35.299418   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:35.299449   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:35.316294   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:35.316330   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:35.433573   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:35.433610   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:35.479149   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:35.479181   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:35.526270   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:35.526299   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.564209   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:35.564241   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.601985   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:35.602020   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.669986   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:35.670015   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.711544   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:35.711580   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:35.763800   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:35.763831   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:35.822699   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:35.822732   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.863377   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:35.863422   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:38.777749   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:52:38.781984   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:52:38.782935   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:38.782955   59208 api_server.go:131] duration metric: took 3.916169938s to wait for apiserver health ...
	I0719 15:52:38.782963   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:38.782983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:38.783026   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:38.818364   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:38.818387   59208 cri.go:89] found id: ""
	I0719 15:52:38.818395   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:38.818442   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.823001   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:38.823054   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:38.857871   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:38.857900   59208 cri.go:89] found id: ""
	I0719 15:52:38.857909   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:38.857958   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.864314   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:38.864375   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:38.910404   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:38.910434   59208 cri.go:89] found id: ""
	I0719 15:52:38.910445   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:38.910505   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.915588   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:38.915645   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:38.952981   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:38.953002   59208 cri.go:89] found id: ""
	I0719 15:52:38.953009   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:38.953055   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.957397   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:38.957447   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:39.002973   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.003001   59208 cri.go:89] found id: ""
	I0719 15:52:39.003011   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:39.003059   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.007496   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:39.007568   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:39.045257   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.045282   59208 cri.go:89] found id: ""
	I0719 15:52:39.045291   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:39.045351   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.049358   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:39.049415   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:39.083263   59208 cri.go:89] found id: ""
	I0719 15:52:39.083303   59208 logs.go:276] 0 containers: []
	W0719 15:52:39.083314   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:39.083321   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:39.083391   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:39.121305   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.121348   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.121354   59208 cri.go:89] found id: ""
	I0719 15:52:39.121363   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:39.121421   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.126259   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.130395   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:39.130413   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:39.171213   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:39.171239   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.206545   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:39.206577   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:39.267068   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:39.267105   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:39.373510   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:39.373544   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.512374   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.012559   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:39.013766   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:35.495479   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.989424   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:38.489746   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.489775   58417 pod_ready.go:81] duration metric: took 5.007393051s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.489790   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495855   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.495884   58417 pod_ready.go:81] duration metric: took 6.085398ms for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495895   58417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:40.502651   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:41.503286   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.503309   58417 pod_ready.go:81] duration metric: took 3.007406201s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.503321   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513225   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.513245   58417 pod_ready.go:81] duration metric: took 9.916405ms for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513256   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517651   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.517668   58417 pod_ready.go:81] duration metric: took 4.40518ms for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517677   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522529   58417 pod_ready.go:92] pod "kube-proxy-qd84x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.522544   58417 pod_ready.go:81] duration metric: took 4.861257ms for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522551   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687964   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.687987   58417 pod_ready.go:81] duration metric: took 165.428951ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687997   58417 pod_ready.go:38] duration metric: took 8.257437931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:41.688016   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:41.688069   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:41.705213   58417 api_server.go:72] duration metric: took 8.573000368s to wait for apiserver process to appear ...
	I0719 15:52:41.705236   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:41.705256   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:52:41.709425   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:52:41.710427   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:52:41.710447   58417 api_server.go:131] duration metric: took 5.203308ms to wait for apiserver health ...
	I0719 15:52:41.710455   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:41.890063   58417 system_pods.go:59] 9 kube-system pods found
	I0719 15:52:41.890091   58417 system_pods.go:61] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:41.890095   58417 system_pods.go:61] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:41.890099   58417 system_pods.go:61] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:41.890103   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:41.890106   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:41.890109   58417 system_pods.go:61] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:41.890112   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:41.890117   58417 system_pods.go:61] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:41.890121   58417 system_pods.go:61] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:41.890128   58417 system_pods.go:74] duration metric: took 179.666477ms to wait for pod list to return data ...
	I0719 15:52:41.890135   58417 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.086946   58417 default_sa.go:45] found service account: "default"
	I0719 15:52:42.086973   58417 default_sa.go:55] duration metric: took 196.832888ms for default service account to be created ...
	I0719 15:52:42.086984   58417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.289457   58417 system_pods.go:86] 9 kube-system pods found
	I0719 15:52:42.289483   58417 system_pods.go:89] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:42.289489   58417 system_pods.go:89] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:42.289493   58417 system_pods.go:89] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:42.289498   58417 system_pods.go:89] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:42.289502   58417 system_pods.go:89] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:42.289506   58417 system_pods.go:89] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:42.289510   58417 system_pods.go:89] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:42.289518   58417 system_pods.go:89] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.289523   58417 system_pods.go:89] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:42.289530   58417 system_pods.go:126] duration metric: took 202.54151ms to wait for k8s-apps to be running ...
	I0719 15:52:42.289536   58417 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.289575   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.304866   58417 system_svc.go:56] duration metric: took 15.319153ms WaitForService to wait for kubelet
	I0719 15:52:42.304931   58417 kubeadm.go:582] duration metric: took 9.172718104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.304958   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.488087   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.488108   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.488122   58417 node_conditions.go:105] duration metric: took 183.159221ms to run NodePressure ...
	I0719 15:52:42.488135   58417 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.488144   58417 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.488157   58417 start.go:255] writing updated cluster config ...
	I0719 15:52:42.488453   58417 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.536465   58417 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 15:52:42.538606   58417 out.go:177] * Done! kubectl is now configured to use "no-preload-382231" cluster and "default" namespace by default
	I0719 15:52:39.422000   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:39.422034   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:39.473826   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:39.473860   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:39.515998   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:39.516023   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:39.559475   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:39.559506   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:39.574174   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:39.574205   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.615906   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:39.615933   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.676764   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:39.676795   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.714437   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:39.714467   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:42.584088   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:42.584114   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.584119   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.584123   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.584127   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.584130   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.584133   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.584138   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.584143   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.584150   59208 system_pods.go:74] duration metric: took 3.801182741s to wait for pod list to return data ...
	I0719 15:52:42.584156   59208 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.586910   59208 default_sa.go:45] found service account: "default"
	I0719 15:52:42.586934   59208 default_sa.go:55] duration metric: took 2.771722ms for default service account to be created ...
	I0719 15:52:42.586943   59208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.593611   59208 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:42.593634   59208 system_pods.go:89] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.593639   59208 system_pods.go:89] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.593645   59208 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.593650   59208 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.593654   59208 system_pods.go:89] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.593658   59208 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.593669   59208 system_pods.go:89] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.593673   59208 system_pods.go:89] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.593680   59208 system_pods.go:126] duration metric: took 6.731347ms to wait for k8s-apps to be running ...
	I0719 15:52:42.593687   59208 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.593726   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.615811   59208 system_svc.go:56] duration metric: took 22.114487ms WaitForService to wait for kubelet
	I0719 15:52:42.615841   59208 kubeadm.go:582] duration metric: took 4m28.151407807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.615864   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.619021   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.619040   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.619050   59208 node_conditions.go:105] duration metric: took 3.180958ms to run NodePressure ...
	I0719 15:52:42.619060   59208 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.619067   59208 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.619079   59208 start.go:255] writing updated cluster config ...
	I0719 15:52:42.619329   59208 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.677117   59208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:42.679317   59208 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-601445" cluster and "default" namespace by default
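Before printing "Done!", the start path above walks a fixed checklist: the kube-system pods are listed and Running, the "default" service account exists, the kubelet unit is active, and node capacity is readable. A rough, hand-run equivalent of those checks, assuming the kubeconfig context this run just created:

    kubectl --context default-k8s-diff-port-601445 -n kube-system get pods
    kubectl --context default-k8s-diff-port-601445 -n default get serviceaccount default
    minikube -p default-k8s-diff-port-601445 ssh -- sudo systemctl is-active kubelet
    kubectl --context default-k8s-diff-port-601445 describe node | grep -A 5 Capacity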
	I0719 15:52:41.514013   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:44.012173   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:47.002231   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:47.002432   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:46.013717   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:48.013121   58376 pod_ready.go:81] duration metric: took 4m0.006772624s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:48.013143   58376 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:48.013150   58376 pod_ready.go:38] duration metric: took 4m4.417474484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
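The four-minute wait that just timed out is an extra readiness gate on system-critical pods; metrics-server-569cc877fc-2tsch never reported Ready, so the wait ends with a context deadline and the flow moves on to the apiserver checks below. The same condition can be probed directly with kubectl wait; the context and label selector here are assumptions for illustration:

    kubectl --context embed-certs-817144 -n kube-system \
      wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=240s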
	I0719 15:52:48.013165   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:48.013194   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:48.013234   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:48.067138   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.067166   58376 cri.go:89] found id: ""
	I0719 15:52:48.067175   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:48.067218   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.071486   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:48.071531   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:48.115491   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.115514   58376 cri.go:89] found id: ""
	I0719 15:52:48.115525   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:48.115583   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.119693   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:48.119750   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:48.161158   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.161185   58376 cri.go:89] found id: ""
	I0719 15:52:48.161194   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:48.161257   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.165533   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:48.165584   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:48.207507   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.207528   58376 cri.go:89] found id: ""
	I0719 15:52:48.207537   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:48.207596   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.212070   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:48.212145   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:48.250413   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.250441   58376 cri.go:89] found id: ""
	I0719 15:52:48.250451   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:48.250510   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.255025   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:48.255095   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:48.289898   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.289922   58376 cri.go:89] found id: ""
	I0719 15:52:48.289930   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:48.289976   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.294440   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:48.294489   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:48.329287   58376 cri.go:89] found id: ""
	I0719 15:52:48.329314   58376 logs.go:276] 0 containers: []
	W0719 15:52:48.329326   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:48.329332   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:48.329394   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:48.373215   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.373242   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.373248   58376 cri.go:89] found id: ""
	I0719 15:52:48.373257   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:48.373311   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.377591   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.381610   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:48.381635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:48.440106   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:48.440148   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:48.455200   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:48.455234   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.496729   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:48.496757   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.535475   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:48.535501   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.592954   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:48.592993   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.635925   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:48.635957   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.671611   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:48.671642   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:48.809648   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:48.809681   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.863327   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:48.863361   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.902200   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:48.902245   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.937497   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:48.937525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:49.446900   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:49.446933   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:51.988535   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:52.005140   58376 api_server.go:72] duration metric: took 4m16.116469116s to wait for apiserver process to appear ...
	I0719 15:52:52.005165   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:52.005206   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:52.005258   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:52.041113   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.041143   58376 cri.go:89] found id: ""
	I0719 15:52:52.041150   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:52.041199   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.045292   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:52.045349   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:52.086747   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.086770   58376 cri.go:89] found id: ""
	I0719 15:52:52.086778   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:52.086821   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.091957   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:52.092015   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:52.128096   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.128128   58376 cri.go:89] found id: ""
	I0719 15:52:52.128138   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:52.128204   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.132889   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:52.132949   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:52.168359   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.168389   58376 cri.go:89] found id: ""
	I0719 15:52:52.168398   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:52.168454   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.172577   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:52.172639   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:52.211667   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.211684   58376 cri.go:89] found id: ""
	I0719 15:52:52.211691   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:52.211740   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.215827   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:52.215893   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:52.252105   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.252130   58376 cri.go:89] found id: ""
	I0719 15:52:52.252140   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:52.252194   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.256407   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:52.256464   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:52.292646   58376 cri.go:89] found id: ""
	I0719 15:52:52.292675   58376 logs.go:276] 0 containers: []
	W0719 15:52:52.292685   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:52.292693   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:52.292755   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:52.326845   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.326875   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.326880   58376 cri.go:89] found id: ""
	I0719 15:52:52.326889   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:52.326946   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.331338   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.335530   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:52.335554   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.371981   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:52.372010   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.406921   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:52.406946   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.442975   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:52.443007   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:52.497838   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:52.497873   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:52.556739   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:52.556776   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:52.665610   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:52.665643   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.711547   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:52.711580   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.759589   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:52.759634   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.807300   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:52.807374   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.857159   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:52.857186   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.917896   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:52.917931   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:53.342603   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:53.342646   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:55.857727   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:52:55.861835   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:52:55.862804   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:55.862822   58376 api_server.go:131] duration metric: took 3.857650801s to wait for apiserver health ...
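The health check above hits the apiserver's /healthz endpoint over HTTPS. The same probe can be made through kubectl, which reuses the cluster credentials (context name taken from this run, everything else assumed):

    kubectl --context embed-certs-817144 get --raw /healthz
    # a healthy control plane answers with the single word: ok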
	I0719 15:52:55.862829   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:55.862852   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:55.862905   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:55.900840   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:55.900859   58376 cri.go:89] found id: ""
	I0719 15:52:55.900866   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:55.900909   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.906205   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:55.906291   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:55.950855   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:55.950879   58376 cri.go:89] found id: ""
	I0719 15:52:55.950887   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:55.950939   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.955407   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:55.955472   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:55.994954   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:55.994981   58376 cri.go:89] found id: ""
	I0719 15:52:55.994992   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:55.995052   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.999179   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:55.999241   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:56.036497   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.036521   58376 cri.go:89] found id: ""
	I0719 15:52:56.036530   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:56.036585   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.041834   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:56.041900   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:56.082911   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.082934   58376 cri.go:89] found id: ""
	I0719 15:52:56.082943   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:56.082998   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.087505   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:56.087571   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:56.124517   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.124544   58376 cri.go:89] found id: ""
	I0719 15:52:56.124554   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:56.124616   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.129221   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:56.129297   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:56.170151   58376 cri.go:89] found id: ""
	I0719 15:52:56.170177   58376 logs.go:276] 0 containers: []
	W0719 15:52:56.170193   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:56.170212   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:56.170292   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:56.218351   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:56.218377   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.218381   58376 cri.go:89] found id: ""
	I0719 15:52:56.218388   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:56.218437   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.223426   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.227742   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:56.227759   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.271701   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:56.271733   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:56.325333   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:56.325366   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:56.431391   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:56.431423   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:56.485442   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:56.485472   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:56.527493   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:56.527525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.563260   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:56.563289   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.600604   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:56.600635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.656262   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:56.656305   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:57.031511   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:57.031549   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:57.046723   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:57.046748   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:57.083358   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:57.083390   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:57.124108   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:57.124136   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:59.670804   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:59.670831   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.670836   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.670840   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.670844   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.670847   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.670850   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.670855   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.670859   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.670865   58376 system_pods.go:74] duration metric: took 3.808031391s to wait for pod list to return data ...
	I0719 15:52:59.670871   58376 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:59.673231   58376 default_sa.go:45] found service account: "default"
	I0719 15:52:59.673249   58376 default_sa.go:55] duration metric: took 2.372657ms for default service account to be created ...
	I0719 15:52:59.673255   58376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:59.678267   58376 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:59.678289   58376 system_pods.go:89] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.678296   58376 system_pods.go:89] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.678303   58376 system_pods.go:89] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.678310   58376 system_pods.go:89] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.678315   58376 system_pods.go:89] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.678322   58376 system_pods.go:89] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.678331   58376 system_pods.go:89] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.678341   58376 system_pods.go:89] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.678352   58376 system_pods.go:126] duration metric: took 5.090968ms to wait for k8s-apps to be running ...
	I0719 15:52:59.678362   58376 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:59.678411   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:59.695116   58376 system_svc.go:56] duration metric: took 16.750228ms WaitForService to wait for kubelet
	I0719 15:52:59.695139   58376 kubeadm.go:582] duration metric: took 4m23.806469478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:59.695163   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:59.697573   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:59.697592   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:59.697602   58376 node_conditions.go:105] duration metric: took 2.433643ms to run NodePressure ...
	I0719 15:52:59.697612   58376 start.go:241] waiting for startup goroutines ...
	I0719 15:52:59.697618   58376 start.go:246] waiting for cluster config update ...
	I0719 15:52:59.697629   58376 start.go:255] writing updated cluster config ...
	I0719 15:52:59.697907   58376 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:59.744965   58376 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:59.746888   58376 out.go:177] * Done! kubectl is now configured to use "embed-certs-817144" cluster and "default" namespace by default
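"minor skew: 0" means the kubectl client (1.30.3) and the cluster's apiserver (1.30.3) differ by zero minor versions, so no skew warning is printed. A quick way to reproduce the comparison by hand, assuming the same context:

    kubectl --context embed-certs-817144 version
    # compare the minor components of Client Version and Server Version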
	I0719 15:53:07.003006   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:07.003249   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004552   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:47.004805   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004816   58817 kubeadm.go:310] 
	I0719 15:53:47.004902   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:53:47.004996   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:53:47.005020   58817 kubeadm.go:310] 
	I0719 15:53:47.005068   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:53:47.005117   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:53:47.005246   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:53:47.005262   58817 kubeadm.go:310] 
	I0719 15:53:47.005397   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:53:47.005458   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:53:47.005508   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:53:47.005522   58817 kubeadm.go:310] 
	I0719 15:53:47.005643   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:53:47.005714   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:53:47.005720   58817 kubeadm.go:310] 
	I0719 15:53:47.005828   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:53:47.005924   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:53:47.005987   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:53:47.006080   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:53:47.006092   58817 kubeadm.go:310] 
	I0719 15:53:47.006824   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:53:47.006941   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:53:47.007028   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 15:53:47.007180   58817 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
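The failed kubeadm init above names the kubelet as the first suspect and lists its own triage commands. Run on the affected v1.20.0 node, the sequence would look roughly like this (CONTAINERID is a placeholder, exactly as in the message above):

    sudo systemctl status kubelet                # is the unit running at all?
    sudo journalctl -xeu kubelet | tail -n 100   # and if not, why it stopped
    # look for crashed control-plane containers started by CRI-O
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID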
	
	I0719 15:53:47.007244   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:53:47.468272   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:53:47.483560   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:53:47.494671   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:53:47.494691   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:53:47.494742   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:53:47.503568   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:53:47.503630   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:53:47.512606   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:53:47.521247   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:53:47.521303   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:53:47.530361   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.539748   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:53:47.539799   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.549243   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:53:47.559306   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:53:47.559369   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
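The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes survives only if it already points at the expected control-plane endpoint, and is removed otherwise before kubeadm init is retried. Condensed into one loop (endpoint and file names copied from the log, the loop itself is only an illustration):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done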
	I0719 15:53:47.570095   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:53:47.648871   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:53:47.649078   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:53:47.792982   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:53:47.793141   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:53:47.793254   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:53:47.992636   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:53:47.994547   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:53:47.994648   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:53:47.994734   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:53:47.994866   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:53:47.994963   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:53:47.995077   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:53:47.995148   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:53:47.995250   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:53:47.995336   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:53:47.995447   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:53:47.995549   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:53:47.995603   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:53:47.995685   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:53:48.092671   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:53:48.256432   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:53:48.334799   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:53:48.483435   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:53:48.504681   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:53:48.505503   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:53:48.505553   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:53:48.654795   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:53:48.656738   58817 out.go:204]   - Booting up control plane ...
	I0719 15:53:48.656849   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:53:48.664278   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:53:48.665556   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:53:48.666292   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:53:48.668355   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:54:28.670119   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:54:28.670451   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:28.670679   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:33.671159   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:33.671408   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:43.671899   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:43.672129   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:03.673219   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:03.673444   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674003   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:43.674282   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674311   58817 kubeadm.go:310] 
	I0719 15:55:43.674362   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:55:43.674430   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:55:43.674439   58817 kubeadm.go:310] 
	I0719 15:55:43.674479   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:55:43.674551   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:55:43.674694   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:55:43.674711   58817 kubeadm.go:310] 
	I0719 15:55:43.674872   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:55:43.674923   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:55:43.674973   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:55:43.674987   58817 kubeadm.go:310] 
	I0719 15:55:43.675076   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:55:43.675185   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:55:43.675204   58817 kubeadm.go:310] 
	I0719 15:55:43.675343   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:55:43.675486   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:55:43.675593   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:55:43.675698   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:55:43.675712   58817 kubeadm.go:310] 
	I0719 15:55:43.676679   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:55:43.676793   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:55:43.676881   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:55:43.676950   58817 kubeadm.go:394] duration metric: took 7m56.357000435s to StartCluster
	I0719 15:55:43.677009   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:55:43.677063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:55:43.720714   58817 cri.go:89] found id: ""
	I0719 15:55:43.720746   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.720757   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:55:43.720765   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:55:43.720832   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:55:43.758961   58817 cri.go:89] found id: ""
	I0719 15:55:43.758987   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.758995   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:55:43.759001   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:55:43.759048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:55:43.798844   58817 cri.go:89] found id: ""
	I0719 15:55:43.798872   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.798882   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:55:43.798889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:55:43.798960   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:55:43.835395   58817 cri.go:89] found id: ""
	I0719 15:55:43.835418   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.835426   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:55:43.835432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:55:43.835499   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:55:43.871773   58817 cri.go:89] found id: ""
	I0719 15:55:43.871800   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.871810   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:55:43.871817   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:55:43.871881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:55:43.903531   58817 cri.go:89] found id: ""
	I0719 15:55:43.903552   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.903559   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:55:43.903565   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:55:43.903613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:55:43.943261   58817 cri.go:89] found id: ""
	I0719 15:55:43.943288   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.943299   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:55:43.943306   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:55:43.943364   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:55:43.980788   58817 cri.go:89] found id: ""
	I0719 15:55:43.980815   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.980826   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:55:43.980837   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:55:43.980853   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:55:44.033880   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:55:44.033922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:55:44.048683   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:55:44.048709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:55:44.129001   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:55:44.129028   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:55:44.129043   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:55:44.245246   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:55:44.245282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:55:44.303587   58817 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:55:44.303632   58817 out.go:239] * 
	W0719 15:55:44.303689   58817 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.303716   58817 out.go:239] * 
	W0719 15:55:44.304733   58817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:55:44.308714   58817 out.go:177] 
	W0719 15:55:44.310103   58817 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.310163   58817 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:55:44.310190   58817 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:55:44.311707   58817 out.go:177] 
	
	
	==> CRI-O <==
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.153325047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404546153303842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=831a4208-8334-4847-abaa-96f874a885a4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.154014501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac6f8ceb-33b8-4195-ad41-d0be72dc9107 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.154065764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac6f8ceb-33b8-4195-ad41-d0be72dc9107 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.154143238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ac6f8ceb-33b8-4195-ad41-d0be72dc9107 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.186493178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5095978-5350-4cb6-b4b9-359faa3d0436 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.186576027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5095978-5350-4cb6-b4b9-359faa3d0436 name=/runtime.v1.RuntimeService/Version
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.188033422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02f0bc05-a0b2-43cf-add0-14ec19899ca6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.188584753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404546188553178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02f0bc05-a0b2-43cf-add0-14ec19899ca6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.189252134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb5adf58-8fe4-4309-9dc7-31267bb56240 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.189317366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb5adf58-8fe4-4309-9dc7-31267bb56240 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.189362001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fb5adf58-8fe4-4309-9dc7-31267bb56240 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.222707087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0002599c-430a-451b-92d8-171c86ee7ecc name=/runtime.v1.RuntimeService/Version
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.222811893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0002599c-430a-451b-92d8-171c86ee7ecc name=/runtime.v1.RuntimeService/Version
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.224060793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f83afdc5-1eb2-41d4-a926-a826ea9b876c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.224591305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404546224562334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f83afdc5-1eb2-41d4-a926-a826ea9b876c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.225285248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b868487b-fc49-42eb-8807-cf435875c75c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.225361473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b868487b-fc49-42eb-8807-cf435875c75c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.225397630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b868487b-fc49-42eb-8807-cf435875c75c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.259377506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a30505a8-c3a9-42fe-9e6f-414ecd5e4edb name=/runtime.v1.RuntimeService/Version
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.259464355Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a30505a8-c3a9-42fe-9e6f-414ecd5e4edb name=/runtime.v1.RuntimeService/Version
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.260916721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34bdc585-2a60-489f-8796-fd521be2c9e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.261421722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404546261394431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34bdc585-2a60-489f-8796-fd521be2c9e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.261970917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67a50175-1448-49a1-a236-ff6832d1c1c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.262038647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67a50175-1448-49a1-a236-ff6832d1c1c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 15:55:46 old-k8s-version-862924 crio[647]: time="2024-07-19 15:55:46.262118788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67a50175-1448-49a1-a236-ff6832d1c1c2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul19 15:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051724] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039649] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.567082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.332449] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.594221] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.070476] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.062261] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077473] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.217641] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.149423] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.267895] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +6.718838] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.061033] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.715316] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[ +12.018302] kauditd_printk_skb: 46 callbacks suppressed
	[Jul19 15:51] systemd-fstab-generator[5022]: Ignoring "noauto" option for root device
	[Jul19 15:53] systemd-fstab-generator[5300]: Ignoring "noauto" option for root device
	[  +0.062109] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:55:46 up 8 min,  0 users,  load average: 0.02, 0.10, 0.07
	Linux old-k8s-version-862924 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000d46a20)
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: goroutine 155 [select]:
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00002def0, 0x4f0ac20, 0xc0001134f0, 0x1, 0xc00009e0c0)
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c4e2a0, 0xc00009e0c0)
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00097ed20, 0xc0009724e0)
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 19 15:55:43 old-k8s-version-862924 kubelet[5481]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 19 15:55:43 old-k8s-version-862924 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 19 15:55:43 old-k8s-version-862924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 19 15:55:44 old-k8s-version-862924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 19 15:55:44 old-k8s-version-862924 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 19 15:55:44 old-k8s-version-862924 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 19 15:55:44 old-k8s-version-862924 kubelet[5541]: I0719 15:55:44.326054    5541 server.go:416] Version: v1.20.0
	Jul 19 15:55:44 old-k8s-version-862924 kubelet[5541]: I0719 15:55:44.326398    5541 server.go:837] Client rotation is on, will bootstrap in background
	Jul 19 15:55:44 old-k8s-version-862924 kubelet[5541]: I0719 15:55:44.329362    5541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 19 15:55:44 old-k8s-version-862924 kubelet[5541]: I0719 15:55:44.330764    5541 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 19 15:55:44 old-k8s-version-862924 kubelet[5541]: W0719 15:55:44.330792    5541 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (224.785484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-862924" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (744.63s)
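The captured kubeadm output above ends with minikube's own suggestion to pass a kubelet cgroup-driver override. A minimal manual retry along those lines, assuming the same profile and a subset of the flags from the failing start command (an untested sketch, not part of the test run):

	# retry the start with the kubelet cgroup driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-862924 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# then check whether the kubelet stays up this time
	out/minikube-linux-amd64 ssh -p old-k8s-version-862924 "sudo journalctl -xeu kubelet | tail -n 50"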

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
E0719 15:44:28.744292   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445: exit status 3 (3.167833695s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:44:30.130576   59080 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	E0719 15:44:30.130598   59080 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-601445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-601445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153880972s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-601445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445: exit status 3 (3.06204934s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 15:44:39.346627   59162 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host
	E0719 15:44:39.346653   59162 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.144:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-601445" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
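Both failures above stem from SSH to the node being unreachable (no route to host on 192.168.61.144:22), so the addon enable never runs. A minimal sketch of the same sequence the test drives, for reproducing by hand against the same profile (commands taken from the test output above; assumes the profile still exists locally):

	# confirm the post-stop host state first
	out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
	# then re-enable the dashboard addon with the test's image override
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-601445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4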

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-382231 -n no-preload-382231
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-19 16:01:43.075638659 +0000 UTC m=+6077.871225198
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
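For reference, an equivalent manual wait for the dashboard pod, assuming the kubeconfig context carries the profile name (namespace, label selector, and the 9m timeout are taken from the test output above; an illustrative sketch only):

	kubectl --context no-preload-382231 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m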
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-382231 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-382231 logs -n 25: (2.4908644s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-127438 -- sudo                         | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-127438                                 | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-574044                           | kubernetes-upgrade-574044    | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-817144            | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
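	For readability, the most recent "start" entry recorded in the table above (the default-k8s-diff-port-601445 row) corresponds to roughly the following single command line. This is only the table's flags re-joined onto one line for reference, not additional output captured from the run:
	
	  minikube start -p default-k8s-diff-port-601445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.3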
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:44:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:44:39.385249   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385257   59208 out.go:304] Setting ErrFile to fd 2...
	I0719 15:44:39.385261   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385405   59208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:44:39.385919   59208 out.go:298] Setting JSON to false
	I0719 15:44:39.386767   59208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5225,"bootTime":1721398654,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:44:39.386817   59208 start.go:139] virtualization: kvm guest
	I0719 15:44:39.390104   59208 out.go:177] * [default-k8s-diff-port-601445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:44:39.391867   59208 notify.go:220] Checking for updates...
	I0719 15:44:39.391890   59208 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:44:39.393463   59208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:44:39.394883   59208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:44:39.396081   59208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:44:39.397280   59208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:44:39.398540   59208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:44:39.400177   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:44:39.400543   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.400601   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.415749   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0719 15:44:39.416104   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.416644   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.416664   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.416981   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.417206   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.417443   59208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:44:39.417751   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.417793   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.432550   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0719 15:44:39.433003   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.433478   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.433504   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.433836   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.434083   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.467474   59208 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:44:38.674498   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:39.468897   59208 start.go:297] selected driver: kvm2
	I0719 15:44:39.468921   59208 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.469073   59208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:44:39.470083   59208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.470178   59208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:44:39.485232   59208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:44:39.485586   59208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:44:39.485616   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:44:39.485624   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:44:39.485666   59208 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.485752   59208 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.487537   59208 out.go:177] * Starting "default-k8s-diff-port-601445" primary control-plane node in "default-k8s-diff-port-601445" cluster
	I0719 15:44:39.488672   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:44:39.488709   59208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:44:39.488718   59208 cache.go:56] Caching tarball of preloaded images
	I0719 15:44:39.488795   59208 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:44:39.488807   59208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:44:39.488895   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:44:39.489065   59208 start.go:360] acquireMachinesLock for default-k8s-diff-port-601445: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:44:41.746585   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:47.826521   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:50.898507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:56.978531   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:00.050437   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:06.130631   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:09.202570   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:15.282481   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:18.354537   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:24.434488   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:27.506515   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:33.586522   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:36.658503   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:42.738573   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:45.810538   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:51.890547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:54.962507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:01.042509   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:04.114621   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:10.194576   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:13.266450   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:19.346524   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:22.418506   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:28.498553   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:31.570507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:37.650477   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:40.722569   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:46.802495   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:49.874579   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:55.954547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:59.026454   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:47:02.030619   58417 start.go:364] duration metric: took 4m36.939495617s to acquireMachinesLock for "no-preload-382231"
	I0719 15:47:02.030679   58417 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:02.030685   58417 fix.go:54] fixHost starting: 
	I0719 15:47:02.031010   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:02.031039   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:02.046256   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0719 15:47:02.046682   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:02.047151   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:47:02.047178   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:02.047573   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:02.047818   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:02.048023   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:47:02.049619   58417 fix.go:112] recreateIfNeeded on no-preload-382231: state=Stopped err=<nil>
	I0719 15:47:02.049641   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	W0719 15:47:02.049785   58417 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:02.051800   58417 out.go:177] * Restarting existing kvm2 VM for "no-preload-382231" ...
	I0719 15:47:02.028090   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:02.028137   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028489   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:47:02.028517   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:47:02.030488   58376 machine.go:97] duration metric: took 4m37.428160404s to provisionDockerMachine
	I0719 15:47:02.030529   58376 fix.go:56] duration metric: took 4m37.450063037s for fixHost
	I0719 15:47:02.030535   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 4m37.450081944s
	W0719 15:47:02.030559   58376 start.go:714] error starting host: provision: host is not running
	W0719 15:47:02.030673   58376 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 15:47:02.030686   58376 start.go:729] Will try again in 5 seconds ...
	I0719 15:47:02.053160   58417 main.go:141] libmachine: (no-preload-382231) Calling .Start
	I0719 15:47:02.053325   58417 main.go:141] libmachine: (no-preload-382231) Ensuring networks are active...
	I0719 15:47:02.054289   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network default is active
	I0719 15:47:02.054786   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network mk-no-preload-382231 is active
	I0719 15:47:02.055259   58417 main.go:141] libmachine: (no-preload-382231) Getting domain xml...
	I0719 15:47:02.056202   58417 main.go:141] libmachine: (no-preload-382231) Creating domain...
	I0719 15:47:03.270495   58417 main.go:141] libmachine: (no-preload-382231) Waiting to get IP...
	I0719 15:47:03.271595   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.272074   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.272151   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.272057   59713 retry.go:31] will retry after 239.502065ms: waiting for machine to come up
	I0719 15:47:03.513745   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.514224   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.514264   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.514191   59713 retry.go:31] will retry after 315.982717ms: waiting for machine to come up
	I0719 15:47:03.831739   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.832155   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.832187   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.832111   59713 retry.go:31] will retry after 468.820113ms: waiting for machine to come up
	I0719 15:47:04.302865   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.303273   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.303306   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.303236   59713 retry.go:31] will retry after 526.764683ms: waiting for machine to come up
	I0719 15:47:04.832048   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.832551   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.832583   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.832504   59713 retry.go:31] will retry after 754.533212ms: waiting for machine to come up
	I0719 15:47:07.032310   58376 start.go:360] acquireMachinesLock for embed-certs-817144: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:47:05.588374   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:05.588834   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:05.588862   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:05.588785   59713 retry.go:31] will retry after 757.18401ms: waiting for machine to come up
	I0719 15:47:06.347691   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:06.348135   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:06.348164   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:06.348053   59713 retry.go:31] will retry after 1.097437331s: waiting for machine to come up
	I0719 15:47:07.446836   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:07.447199   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:07.447219   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:07.447158   59713 retry.go:31] will retry after 1.448513766s: waiting for machine to come up
	I0719 15:47:08.897886   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:08.898289   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:08.898317   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:08.898216   59713 retry.go:31] will retry after 1.583843671s: waiting for machine to come up
	I0719 15:47:10.483476   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:10.483934   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:10.483963   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:10.483864   59713 retry.go:31] will retry after 1.86995909s: waiting for machine to come up
	I0719 15:47:12.355401   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:12.355802   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:12.355827   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:12.355762   59713 retry.go:31] will retry after 2.577908462s: waiting for machine to come up
	I0719 15:47:14.934837   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:14.935263   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:14.935285   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:14.935225   59713 retry.go:31] will retry after 3.158958575s: waiting for machine to come up
	I0719 15:47:19.278747   58817 start.go:364] duration metric: took 3m55.914249116s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:47:19.278822   58817 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:19.278831   58817 fix.go:54] fixHost starting: 
	I0719 15:47:19.279163   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:19.279196   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:19.294722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0719 15:47:19.295092   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:19.295537   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:47:19.295561   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:19.295950   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:19.296186   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:19.296333   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:47:19.297864   58817 fix.go:112] recreateIfNeeded on old-k8s-version-862924: state=Stopped err=<nil>
	I0719 15:47:19.297895   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	W0719 15:47:19.298077   58817 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:19.300041   58817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	I0719 15:47:18.095456   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095912   58417 main.go:141] libmachine: (no-preload-382231) Found IP for machine: 192.168.39.227
	I0719 15:47:18.095936   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has current primary IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095942   58417 main.go:141] libmachine: (no-preload-382231) Reserving static IP address...
	I0719 15:47:18.096317   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.096357   58417 main.go:141] libmachine: (no-preload-382231) Reserved static IP address: 192.168.39.227
	I0719 15:47:18.096376   58417 main.go:141] libmachine: (no-preload-382231) DBG | skip adding static IP to network mk-no-preload-382231 - found existing host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"}
	I0719 15:47:18.096392   58417 main.go:141] libmachine: (no-preload-382231) DBG | Getting to WaitForSSH function...
	I0719 15:47:18.096407   58417 main.go:141] libmachine: (no-preload-382231) Waiting for SSH to be available...
	I0719 15:47:18.098619   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.098978   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.099008   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.099122   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH client type: external
	I0719 15:47:18.099151   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa (-rw-------)
	I0719 15:47:18.099183   58417 main.go:141] libmachine: (no-preload-382231) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:18.099196   58417 main.go:141] libmachine: (no-preload-382231) DBG | About to run SSH command:
	I0719 15:47:18.099210   58417 main.go:141] libmachine: (no-preload-382231) DBG | exit 0
	I0719 15:47:18.222285   58417 main.go:141] libmachine: (no-preload-382231) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:18.222607   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetConfigRaw
	I0719 15:47:18.223181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.225751   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226062   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.226105   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226327   58417 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/config.json ...
	I0719 15:47:18.226504   58417 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:18.226520   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:18.226684   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.228592   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.228936   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.228960   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.229094   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.229246   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229398   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229516   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.229663   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.229887   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.229901   58417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:18.330731   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:18.330764   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331053   58417 buildroot.go:166] provisioning hostname "no-preload-382231"
	I0719 15:47:18.331084   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331282   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.333905   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334212   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.334270   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334331   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.334510   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334705   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334850   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.335030   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.335216   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.335230   58417 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-382231 && echo "no-preload-382231" | sudo tee /etc/hostname
	I0719 15:47:18.453128   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-382231
	
	I0719 15:47:18.453151   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.455964   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456323   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.456349   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456549   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.456822   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457010   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457158   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.457300   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.457535   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.457561   58417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-382231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-382231/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-382231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:18.568852   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:18.568878   58417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:18.568902   58417 buildroot.go:174] setting up certificates
	I0719 15:47:18.568915   58417 provision.go:84] configureAuth start
	I0719 15:47:18.568924   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.569240   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.571473   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.571757   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.571783   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.572029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.573941   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574213   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.574247   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574393   58417 provision.go:143] copyHostCerts
	I0719 15:47:18.574455   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:18.574465   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:18.574528   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:18.574615   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:18.574622   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:18.574645   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:18.574696   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:18.574703   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:18.574722   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:18.574768   58417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.no-preload-382231 san=[127.0.0.1 192.168.39.227 localhost minikube no-preload-382231]
	I0719 15:47:18.636408   58417 provision.go:177] copyRemoteCerts
	I0719 15:47:18.636458   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:18.636477   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.638719   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639021   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.639054   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639191   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.639379   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.639532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.639795   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:18.720305   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:18.742906   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:18.764937   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:47:18.787183   58417 provision.go:87] duration metric: took 218.257504ms to configureAuth
	I0719 15:47:18.787205   58417 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:18.787355   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:47:18.787418   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.789685   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.789992   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.790017   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.790181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.790366   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790632   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.790770   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.790929   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.790943   58417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:19.053326   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:19.053350   58417 machine.go:97] duration metric: took 826.83404ms to provisionDockerMachine
	I0719 15:47:19.053364   58417 start.go:293] postStartSetup for "no-preload-382231" (driver="kvm2")
	I0719 15:47:19.053379   58417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:19.053409   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.053733   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:19.053755   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.056355   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056709   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.056737   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056884   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.057037   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.057172   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.057370   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.136785   58417 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:19.140756   58417 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:19.140777   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:19.140847   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:19.140941   58417 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:19.141044   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:19.150247   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:19.172800   58417 start.go:296] duration metric: took 119.424607ms for postStartSetup
	I0719 15:47:19.172832   58417 fix.go:56] duration metric: took 17.142146552s for fixHost
	I0719 15:47:19.172849   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.175427   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.175816   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.175851   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.176027   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.176281   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176636   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.176892   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:19.177051   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:19.177061   58417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:19.278564   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404039.251890495
	
	I0719 15:47:19.278594   58417 fix.go:216] guest clock: 1721404039.251890495
	I0719 15:47:19.278605   58417 fix.go:229] Guest: 2024-07-19 15:47:19.251890495 +0000 UTC Remote: 2024-07-19 15:47:19.172835531 +0000 UTC m=+294.220034318 (delta=79.054964ms)
	I0719 15:47:19.278651   58417 fix.go:200] guest clock delta is within tolerance: 79.054964ms
	I0719 15:47:19.278659   58417 start.go:83] releasing machines lock for "no-preload-382231", held for 17.247997118s
	I0719 15:47:19.278692   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.279029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:19.281674   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282034   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.282063   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282221   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282750   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282935   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282991   58417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:19.283061   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.283095   58417 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:19.283116   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.285509   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285805   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.285828   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285846   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285959   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286182   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286276   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.286300   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.286468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286481   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286632   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.286672   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286806   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286935   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.363444   58417 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:19.387514   58417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:19.545902   58417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:19.551747   58417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:19.551812   58417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:19.568563   58417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:19.568589   58417 start.go:495] detecting cgroup driver to use...
	I0719 15:47:19.568654   58417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:19.589440   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:19.604889   58417 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:19.604962   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:19.624114   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:19.638265   58417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:19.752880   58417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:19.900078   58417 docker.go:233] disabling docker service ...
	I0719 15:47:19.900132   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:19.914990   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:19.928976   58417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:20.079363   58417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:20.203629   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:20.218502   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:20.237028   58417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 15:47:20.237089   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.248514   58417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:20.248597   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.260162   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.272166   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.283341   58417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:20.294687   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.305495   58417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.328024   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
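The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image and cgroupfs as the cgroup manager (plus the conmon cgroup and unprivileged-port sysctl entries). The Go sketch below is an illustrative, in-process equivalent of the two main substitutions; minikube itself performs them over SSH with sed, exactly as logged.

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf is an illustrative stand-in for the sed edits above:
    // force the pause image and the cgroupfs cgroup manager in a
    // 02-crio.conf-style snippet.
    func rewriteCrioConf(conf string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }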
	I0719 15:47:20.339666   58417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:20.349271   58417 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:20.349314   58417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:20.364130   58417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:20.376267   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:20.501259   58417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:20.643763   58417 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:20.643828   58417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:20.648525   58417 start.go:563] Will wait 60s for crictl version
	I0719 15:47:20.648586   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:20.652256   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:20.689386   58417 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:20.689468   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.720662   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.751393   58417 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 15:47:19.301467   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .Start
	I0719 15:47:19.301647   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:47:19.302430   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:47:19.302790   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:47:19.303288   58817 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:47:19.304087   58817 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:47:20.540210   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:47:20.541173   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.541580   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.541657   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.541560   59851 retry.go:31] will retry after 276.525447ms: waiting for machine to come up
	I0719 15:47:20.820097   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.820549   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.820577   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.820512   59851 retry.go:31] will retry after 350.128419ms: waiting for machine to come up
	I0719 15:47:21.172277   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.172787   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.172814   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.172742   59851 retry.go:31] will retry after 437.780791ms: waiting for machine to come up
	I0719 15:47:21.612338   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.612766   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.612796   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.612710   59851 retry.go:31] will retry after 607.044351ms: waiting for machine to come up
	I0719 15:47:22.221152   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.221715   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.221755   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.221589   59851 retry.go:31] will retry after 568.388882ms: waiting for machine to come up
	I0719 15:47:22.791499   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.791966   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.791996   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.791912   59851 retry.go:31] will retry after 786.805254ms: waiting for machine to come up
	I0719 15:47:20.752939   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:20.755996   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756367   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:20.756395   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756723   58417 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:20.760962   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:20.776973   58417 kubeadm.go:883] updating cluster {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:20.777084   58417 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 15:47:20.777120   58417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:20.814520   58417 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 15:47:20.814547   58417 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:20.814631   58417 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:20.814650   58417 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.814657   58417 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.814682   58417 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.814637   58417 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.814736   58417 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.814808   58417 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.814742   58417 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.816435   58417 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.816446   58417 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.816513   58417 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.816535   58417 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816559   58417 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.816719   58417 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:21.003845   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 15:47:21.028954   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.039628   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.041391   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.065499   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.084966   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.142812   58417 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 15:47:21.142873   58417 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 15:47:21.142905   58417 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.142921   58417 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 15:47:21.142939   58417 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.142962   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142877   58417 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.143025   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142983   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.160141   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.182875   58417 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 15:47:21.182918   58417 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.182945   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.182958   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.182957   58417 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 15:47:21.182992   58417 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.183029   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.183044   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.183064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.272688   58417 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 15:47:21.272724   58417 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.272768   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.272783   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272825   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.272876   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272906   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 15:47:21.272931   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.272971   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:21.272997   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 15:47:21.273064   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:21.326354   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326356   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.326441   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 15:47:21.326457   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326459   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326492   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326497   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.326529   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 15:47:21.326535   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 15:47:21.326633   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.363401   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:21.363496   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:22.268448   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.010876   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.684346805s)
	I0719 15:47:24.010910   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 15:47:24.010920   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.684439864s)
	I0719 15:47:24.010952   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 15:47:24.010930   58417 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.010993   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.684342001s)
	I0719 15:47:24.011014   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.011019   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011046   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.647533327s)
	I0719 15:47:24.011066   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011098   58417 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742620594s)
	I0719 15:47:24.011137   58417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 15:47:24.011170   58417 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.011204   58417 ssh_runner.go:195] Run: which crictl
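In the chunk above, minikube finds that the cached images are not yet present in the container runtime ("needs transfer"), copies the tarballs to /var/lib/minikube/images if needed, and loads each one with podman load -i. The following Go sketch captures that decide-then-load flow; the helper names and the crictl/podman invocations are illustrative assumptions, not the cache_images.go implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagePresent asks the runtime (via crictl) whether an image is already
    // available; an empty result means it needs to be transferred and loaded.
    func imagePresent(image string) bool {
        out, err := exec.Command("sudo", "crictl", "images", "-q", image).Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    // loadCachedImage mirrors the decide-then-load flow in the log: skip images
    // the runtime already has, otherwise load the cached tarball with podman.
    func loadCachedImage(image, tarball string) error {
        if imagePresent(image) {
            fmt.Printf("%s already present, skipping load\n", image)
            return nil
        }
        fmt.Printf("loading %s from %s\n", image, tarball)
        return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
    }

    func main() {
        // Paths and image names taken from the log; the helpers themselves are
        // illustrative and not minikube's API.
        if err := loadCachedImage("registry.k8s.io/etcd:3.5.14-0", "/var/lib/minikube/images/etcd_3.5.14-0"); err != nil {
            fmt.Println("load failed:", err)
        }
    }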
	I0719 15:47:23.580485   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:23.580950   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:23.580983   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:23.580876   59851 retry.go:31] will retry after 919.322539ms: waiting for machine to come up
	I0719 15:47:24.502381   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:24.502817   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:24.502844   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:24.502776   59851 retry.go:31] will retry after 1.142581835s: waiting for machine to come up
	I0719 15:47:25.647200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:25.647663   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:25.647693   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:25.647559   59851 retry.go:31] will retry after 1.682329055s: waiting for machine to come up
	I0719 15:47:27.332531   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:27.333052   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:27.333080   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:27.333003   59851 retry.go:31] will retry after 1.579786507s: waiting for machine to come up
	I0719 15:47:27.292973   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.281931356s)
	I0719 15:47:27.293008   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 15:47:27.293001   58417 ssh_runner.go:235] Completed: which crictl: (3.281778521s)
	I0719 15:47:27.293043   58417 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:27.293064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:27.293086   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:29.269642   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976526914s)
	I0719 15:47:29.269676   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 15:47:29.269698   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269641   58417 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.97655096s)
	I0719 15:47:29.269748   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269773   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 15:47:29.269875   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:28.914628   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:28.915181   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:28.915221   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:28.915127   59851 retry.go:31] will retry after 2.156491688s: waiting for machine to come up
	I0719 15:47:31.073521   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:31.074101   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:31.074136   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:31.074039   59851 retry.go:31] will retry after 2.252021853s: waiting for machine to come up
	I0719 15:47:31.242199   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.972421845s)
	I0719 15:47:31.242257   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 15:47:31.242273   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.972374564s)
	I0719 15:47:31.242283   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:31.242306   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 15:47:31.242334   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:32.592736   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.350379333s)
	I0719 15:47:32.592762   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 15:47:32.592782   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:32.592817   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:34.547084   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954243196s)
	I0719 15:47:34.547122   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 15:47:34.547155   58417 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:34.547231   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:33.328344   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:33.328815   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:33.328849   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:33.328779   59851 retry.go:31] will retry after 4.118454422s: waiting for machine to come up
	I0719 15:47:37.451169   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.451651   58817 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:47:37.451677   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:47:37.451691   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.452205   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.452240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | skip adding static IP to network mk-old-k8s-version-862924 - found existing host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"}
	I0719 15:47:37.452258   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:47:37.452276   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:47:37.452287   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:47:37.454636   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455004   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.455043   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455210   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:47:37.455242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:47:37.455284   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:37.455302   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:47:37.455316   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:47:37.583375   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:37.583754   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:47:37.584481   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.587242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587644   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.587668   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587961   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:47:37.588195   58817 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:37.588217   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:37.588446   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.590801   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591137   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.591166   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.591471   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591592   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591736   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.591896   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.592100   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.592111   58817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:37.698760   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:37.698787   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699086   58817 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:47:37.699113   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699326   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.701828   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702208   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.702253   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702339   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.702508   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702674   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702817   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.702983   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.703136   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.703147   58817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:47:37.823930   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:47:37.823960   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.826546   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.826875   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.826912   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.827043   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.827336   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827506   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827690   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.827858   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.828039   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.828056   58817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:37.935860   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:37.935888   58817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:37.935917   58817 buildroot.go:174] setting up certificates
	I0719 15:47:37.935927   58817 provision.go:84] configureAuth start
	I0719 15:47:37.935939   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.936223   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.938638   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.938990   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.939017   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.939170   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.941161   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941458   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.941487   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941597   58817 provision.go:143] copyHostCerts
	I0719 15:47:37.941669   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:37.941682   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:37.941731   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:37.941824   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:37.941832   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:37.941850   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:37.941910   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:37.941919   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:37.941942   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:37.942003   58817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
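Here provision.go generates a server certificate whose SANs cover 127.0.0.1, 192.168.50.102, localhost, minikube and old-k8s-version-862924, signed by the minikube CA. As a rough, self-contained illustration only (self-signed rather than CA-signed; the names and addresses are taken from the log line above), a certificate with equivalent SANs can be produced with Go's crypto/x509:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed here for brevity; the logged step signs with the minikube CA.
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-862924"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.102")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-862924"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }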
	I0719 15:47:38.046717   58817 provision.go:177] copyRemoteCerts
	I0719 15:47:38.046770   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:38.046799   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.049240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049578   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.049611   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049806   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.050026   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.050200   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.050377   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.133032   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:38.157804   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:47:38.184189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:38.207761   58817 provision.go:87] duration metric: took 271.801669ms to configureAuth
	I0719 15:47:38.207801   58817 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:38.208023   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:47:38.208148   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.211030   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211467   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.211497   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211675   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.211851   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212046   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212195   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.212374   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.212556   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.212578   58817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:38.759098   59208 start.go:364] duration metric: took 2m59.27000152s to acquireMachinesLock for "default-k8s-diff-port-601445"
	I0719 15:47:38.759165   59208 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:38.759176   59208 fix.go:54] fixHost starting: 
	I0719 15:47:38.759633   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:38.759685   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:38.779587   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0719 15:47:38.779979   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:38.780480   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:47:38.780497   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:38.780888   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:38.781129   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:38.781260   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:47:38.782786   59208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601445: state=Stopped err=<nil>
	I0719 15:47:38.782860   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	W0719 15:47:38.783056   59208 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:38.785037   59208 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-601445" ...
	I0719 15:47:38.786497   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Start
	I0719 15:47:38.786691   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring networks are active...
	I0719 15:47:38.787520   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network default is active
	I0719 15:47:38.787819   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network mk-default-k8s-diff-port-601445 is active
	I0719 15:47:38.788418   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Getting domain xml...
	I0719 15:47:38.789173   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Creating domain...
	I0719 15:47:35.191148   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 15:47:35.191193   58417 cache_images.go:123] Successfully loaded all cached images
	I0719 15:47:35.191198   58417 cache_images.go:92] duration metric: took 14.376640053s to LoadCachedImages
	I0719 15:47:35.191209   58417 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0-beta.0 crio true true} ...
	I0719 15:47:35.191329   58417 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-382231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
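In the drop-in above, the empty "ExecStart=" line is the standard systemd idiom for clearing the base unit's ExecStart before overriding it; the rendered text is written below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A rough, illustrative sketch of rendering such a drop-in from the node parameters (the template text and field names here are assumptions, not minikube's actual code):

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the kubelet drop-in logged above; only the Kubernetes
// version, node name and node IP vary per profile.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.0-beta.0",
		"NodeName":          "no-preload-382231",
		"NodeIP":            "192.168.39.227",
	})
}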
	I0719 15:47:35.191424   58417 ssh_runner.go:195] Run: crio config
	I0719 15:47:35.236248   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:35.236276   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:35.236288   58417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:35.236309   58417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-382231 NodeName:no-preload-382231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:47:35.236464   58417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-382231"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:35.236525   58417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
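The kubeadm config generated above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---"; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied over /var/tmp/minikube/kubeadm.yaml. A quick way to sanity-check that such a multi-document file parses, sketched here with gopkg.in/yaml.v3 (an assumption; this is not how minikube validates it):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Assumes the generated config has been saved locally as kubeadm.yaml.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		// Prints e.g. "kubeadm.k8s.io/v1beta3/InitConfiguration" per document.
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}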
	I0719 15:47:35.247524   58417 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:35.247611   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:35.257583   58417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 15:47:35.275057   58417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 15:47:35.291468   58417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 15:47:35.308021   58417 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:35.312121   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
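The one-liner above keeps /etc/hosts idempotent: it filters out any line already ending in a tab plus "control-plane.minikube.internal", appends a fresh entry, and copies the temporary file back over /etc/hosts. The same filter-and-append expressed directly in Go (an illustrative sketch; updateHosts is a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts rewrites a hosts file so it contains exactly one entry mapping
// host to ip, mirroring the grep -v / echo / cp pipeline in the log.
func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry, like grep -v $'\t<host>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHosts("/etc/hosts", "192.168.39.227", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}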
	I0719 15:47:35.324449   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:35.451149   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:35.477844   58417 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231 for IP: 192.168.39.227
	I0719 15:47:35.477868   58417 certs.go:194] generating shared ca certs ...
	I0719 15:47:35.477887   58417 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:35.478043   58417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:35.478093   58417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:35.478103   58417 certs.go:256] generating profile certs ...
	I0719 15:47:35.478174   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.key
	I0719 15:47:35.478301   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key.46f9a235
	I0719 15:47:35.478339   58417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key
	I0719 15:47:35.478482   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:35.478520   58417 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:35.478530   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:35.478549   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:35.478569   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:35.478591   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:35.478628   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:35.479291   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:35.523106   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:35.546934   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:35.585616   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:35.617030   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:47:35.641486   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:47:35.680051   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:35.703679   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:47:35.728088   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:35.751219   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:35.774149   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:35.796985   58417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:35.813795   58417 ssh_runner.go:195] Run: openssl version
	I0719 15:47:35.819568   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:35.830350   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834792   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834847   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.840531   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:35.851584   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:35.862655   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867139   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867199   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.872916   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:35.883986   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:35.894795   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899001   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899049   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.904496   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
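Each CA bundle copied into /usr/share/ca-certificates is made visible to OpenSSL by a matching /etc/ssl/certs/<subject-hash>.0 symlink, with the hash taken from "openssl x509 -hash -noout -in". A sketch of those two steps (linkCert is a hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates the <hash>.0 symlink for an installed PEM, mirroring the
// "openssl x509 -hash" plus "ln -fs" sequence in the log.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}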
	I0719 15:47:35.915180   58417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:35.919395   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:35.926075   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:35.931870   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:35.938089   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:35.944079   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:35.950449   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
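Each "-checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is what would force regeneration on restart. The equivalent check can be done natively with crypto/x509; a minimal sketch (the path is simply the first certificate checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the native equivalent of "openssl x509 -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}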
	I0719 15:47:35.956291   58417 kubeadm.go:392] StartCluster: {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:35.956396   58417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:35.956452   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:35.993976   58417 cri.go:89] found id: ""
	I0719 15:47:35.994047   58417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:36.004507   58417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:36.004532   58417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:36.004579   58417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:36.014644   58417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:36.015628   58417 kubeconfig.go:125] found "no-preload-382231" server: "https://192.168.39.227:8443"
	I0719 15:47:36.017618   58417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:36.027252   58417 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0719 15:47:36.027281   58417 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:36.027292   58417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:36.027350   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:36.066863   58417 cri.go:89] found id: ""
	I0719 15:47:36.066934   58417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:36.082971   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:36.092782   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:36.092802   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:36.092841   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:36.101945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:36.101998   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:36.111368   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:36.120402   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:36.120447   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:36.130124   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.138945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:36.138990   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.148176   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:36.157008   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:36.157060   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:36.166273   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:36.176032   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:36.291855   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.285472   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.476541   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.547807   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.652551   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:37.652649   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.153088   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.653690   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.718826   58417 api_server.go:72] duration metric: took 1.066275053s to wait for apiserver process to appear ...
	I0719 15:47:38.718858   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:47:38.718891   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:38.503709   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:38.503737   58817 machine.go:97] duration metric: took 915.527957ms to provisionDockerMachine
	I0719 15:47:38.503750   58817 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:47:38.503762   58817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:38.503783   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.504151   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:38.504180   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.507475   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.507843   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.507877   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.508083   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.508314   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.508465   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.508583   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.593985   58817 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:38.598265   58817 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:38.598287   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:38.598352   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:38.598446   58817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:38.598533   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:38.609186   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:38.644767   58817 start.go:296] duration metric: took 141.002746ms for postStartSetup
	I0719 15:47:38.644808   58817 fix.go:56] duration metric: took 19.365976542s for fixHost
	I0719 15:47:38.644836   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.648171   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648545   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.648576   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648777   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.649009   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649185   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649360   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.649513   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.649779   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.649795   58817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:38.758955   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404058.716653194
	
	I0719 15:47:38.758978   58817 fix.go:216] guest clock: 1721404058.716653194
	I0719 15:47:38.758987   58817 fix.go:229] Guest: 2024-07-19 15:47:38.716653194 +0000 UTC Remote: 2024-07-19 15:47:38.644812576 +0000 UTC m=+255.418683135 (delta=71.840618ms)
	I0719 15:47:38.759010   58817 fix.go:200] guest clock delta is within tolerance: 71.840618ms
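The guest clock check runs "date +%s.%N" on the VM (the mangled "date +%!s(MISSING).%!N(MISSING)" above is that format string), parses the result, and compares it against the host's reference time; here the 71.840618ms delta is within tolerance, so the guest clock is left alone. A small sketch of parsing such output and computing the delta (illustrative only, not fix.go's implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Example output of `date +%s.%N` on the guest, taken from the log above.
	guestOut := "1721404058.716653194"
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now() // the real check uses the host-side time captured around the SSH call
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta)
}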
	I0719 15:47:38.759017   58817 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 19.4802155s
	I0719 15:47:38.759056   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.759308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:38.761901   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762334   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.762368   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762525   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763030   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763198   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763296   58817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:38.763343   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.763489   58817 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:38.763522   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.766613   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.766771   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767028   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767050   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767219   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767298   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767377   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767453   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767577   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767637   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767723   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767768   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.767845   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.874680   58817 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:38.882155   58817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:39.030824   58817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:39.038357   58817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:39.038458   58817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:39.059981   58817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:39.060015   58817 start.go:495] detecting cgroup driver to use...
	I0719 15:47:39.060081   58817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:39.082631   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:39.101570   58817 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:39.101628   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:39.120103   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:39.139636   58817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:39.259574   58817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:39.441096   58817 docker.go:233] disabling docker service ...
	I0719 15:47:39.441162   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:39.460197   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:39.476884   58817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:39.639473   58817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:39.773468   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:39.790968   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:39.811330   58817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:47:39.811407   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.823965   58817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:39.824057   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.835454   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.846201   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.856951   58817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
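The CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf is adjusted with in-place sed substitutions: pause_image is pinned to registry.k8s.io/pause:3.2 for this Kubernetes version, cgroup_manager is set to cgroupfs, and a conmon_cgroup = "pod" line is re-inserted after it. The first two substitutions expressed as Go regexps over a local copy of the file (sketch only, not the crio.go implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // assume a local copy of the CRI-O drop-in
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf := string(data)
	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}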
	I0719 15:47:39.869495   58817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:39.880850   58817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:39.880914   58817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:39.900465   58817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:39.911488   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:40.032501   58817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:40.194606   58817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:40.194676   58817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:40.199572   58817 start.go:563] Will wait 60s for crictl version
	I0719 15:47:40.199683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:40.203747   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:40.246479   58817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
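After CRI-O is restarted, the code waits up to 60s for /var/run/crio/crio.sock to appear and then shells out to crictl for the runtime version reported above. A minimal sketch of that wait-then-query sequence (illustrative, not the start.go implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"

	// Poll for the CRI-O socket, as in "Will wait 60s for socket path".
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Then ask crictl for the runtime name and version, as the log does.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Print(string(out))
}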
	I0719 15:47:40.246594   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.275992   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.313199   58817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:47:40.314363   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:40.317688   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318081   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:40.318106   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318333   58817 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:40.323006   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:40.336488   58817 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:40.336626   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:47:40.336672   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:40.394863   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:40.394934   58817 ssh_runner.go:195] Run: which lz4
	I0719 15:47:40.399546   58817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:47:40.404163   58817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:47:40.404197   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:47:42.191817   58817 crio.go:462] duration metric: took 1.792317426s to copy over tarball
	I0719 15:47:42.191882   58817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:41.984204   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:41.984237   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:41.984255   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.031024   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:42.031055   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:42.219815   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.256851   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.256888   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:42.719015   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.756668   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.756705   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.219173   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.255610   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:43.255645   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.719116   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.725453   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:47:43.739070   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:47:43.739108   58417 api_server.go:131] duration metric: took 5.020238689s to wait for apiserver health ...
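The healthz wait above first sees 403 (the probe is treated as system:anonymous and denied), then 500 while post-start hooks such as rbac/bootstrap-roles are still completing, and finally 200 once the control plane is ready. A minimal sketch of polling such an endpoint over HTTPS until it reports healthy (illustrative, not api_server.go itself; a real client would trust the cluster CA instead of skipping verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.39.227:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only because this sketch has no cluster CA loaded.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy in time")
}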
	I0719 15:47:43.739119   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:43.739128   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:43.741458   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:47:40.069048   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting to get IP...
	I0719 15:47:40.069866   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070409   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070480   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.070379   59996 retry.go:31] will retry after 299.168281ms: waiting for machine to come up
	I0719 15:47:40.370939   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371381   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.371340   59996 retry.go:31] will retry after 388.345842ms: waiting for machine to come up
	I0719 15:47:40.761301   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762861   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762889   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.762797   59996 retry.go:31] will retry after 305.39596ms: waiting for machine to come up
	I0719 15:47:41.070215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070791   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070823   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.070746   59996 retry.go:31] will retry after 452.50233ms: waiting for machine to come up
	I0719 15:47:41.525465   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.525997   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.526019   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.525920   59996 retry.go:31] will retry after 686.050268ms: waiting for machine to come up
	I0719 15:47:42.214012   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214513   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214545   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:42.214465   59996 retry.go:31] will retry after 867.815689ms: waiting for machine to come up
	I0719 15:47:43.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084240   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:43.084198   59996 retry.go:31] will retry after 1.006018507s: waiting for machine to come up
	I0719 15:47:44.092571   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093050   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:44.092992   59996 retry.go:31] will retry after 961.604699ms: waiting for machine to come up
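
The retry.go lines above show libmachine re-checking the libvirt DHCP leases for the domain's MAC address, sleeping a little longer (with jitter) after each failed lookup. A rough sketch of that retry pattern; lookupIP here is a hypothetical stand-in for the real lease query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for the real probe (querying the libvirt DHCP
// leases for the machine's MAC address). It fails until the lease appears.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.61.144", nil
}

func main() {
	base := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay a little each round and add jitter, mirroring the
		// "will retry after 299ms / 388ms / ..." progression in the log.
		delay := base + time.Duration(attempt)*100*time.Millisecond +
			time.Duration(rand.Intn(200))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}
```
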
	I0719 15:47:43.743125   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:47:43.780558   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:47:43.825123   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:47:43.849564   58417 system_pods.go:59] 8 kube-system pods found
	I0719 15:47:43.849608   58417 system_pods.go:61] "coredns-5cfdc65f69-9p4dr" [b6744bc9-b683-4f7e-b506-a95eb58ac308] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:47:43.849620   58417 system_pods.go:61] "etcd-no-preload-382231" [1f2704ae-84a0-4636-9826-f6bb5d2cb8b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:47:43.849632   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [e4ae90fb-9024-4420-9249-6f936ff43894] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:47:43.849643   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [ceb3538d-a6b9-4135-b044-b139003baf35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:47:43.849650   58417 system_pods.go:61] "kube-proxy-z2z9r" [fdc0eb8f-2884-436b-ba1e-4c71107f756c] Running
	I0719 15:47:43.849657   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [5ae3221b-7186-4dbe-9b1b-fb4c8c239c62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:47:43.849677   58417 system_pods.go:61] "metrics-server-78fcd8795b-zwr8g" [4d4de9aa-89f2-4cf4-85c2-26df25bd82c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:47:43.849687   58417 system_pods.go:61] "storage-provisioner" [ab5ce17f-a0da-4ab7-803e-245ba4363d09] Running
	I0719 15:47:43.849696   58417 system_pods.go:74] duration metric: took 24.54438ms to wait for pod list to return data ...
	I0719 15:47:43.849709   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:47:43.864512   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:47:43.864636   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:47:43.864684   58417 node_conditions.go:105] duration metric: took 14.967708ms to run NodePressure ...
	I0719 15:47:43.864727   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:44.524399   58417 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531924   58417 kubeadm.go:739] kubelet initialised
	I0719 15:47:44.531944   58417 kubeadm.go:740] duration metric: took 7.516197ms waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531952   58417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:47:44.538016   58417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
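
Once the bridge CNI config is written, the tool lists kube-system pods and reports which are Ready before waiting on the system-critical ones. A minimal client-go sketch of that listing step, assuming a kubeconfig path is passed as the first argument:

```go
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // path to kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s ready=%v\n", p.Name, isReady(p))
	}
}
```
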
	I0719 15:47:45.377244   58817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18533335s)
	I0719 15:47:45.377275   58817 crio.go:469] duration metric: took 3.185430213s to extract the tarball
	I0719 15:47:45.377282   58817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:47:45.422160   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:45.463351   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:45.463377   58817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:45.463437   58817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.463445   58817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.463484   58817 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.463496   58817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.463452   58817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.463470   58817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465250   58817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465259   58817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.465270   58817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.465280   58817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.465252   58817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.465254   58817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.465322   58817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.465358   58817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.652138   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:47:45.694548   58817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:47:45.694600   58817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:47:45.694655   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.698969   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:47:45.721986   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.747138   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:47:45.779449   58817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:47:45.779485   58817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.779526   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.783597   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.822950   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.825025   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:47:45.830471   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.835797   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.837995   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.840998   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.907741   58817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:47:45.907793   58817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.907845   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.928805   58817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:47:45.928844   58817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.928918   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.948467   58817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:47:45.948522   58817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.948571   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.966584   58817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:47:45.966629   58817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.966683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975276   58817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:47:45.975316   58817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.975339   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.975355   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975378   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.975424   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.975449   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:46.069073   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:46.069100   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:47:46.079020   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:47:46.080816   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:47:46.080818   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:47:46.111983   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:47:46.308204   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:46.465651   58817 cache_images.go:92] duration metric: took 1.002255395s to LoadCachedImages
	W0719 15:47:46.465740   58817 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
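
Each "needs transfer" line above comes from comparing the image ID that podman reports against the expected digest; on a mismatch the image is removed with crictl and queued for loading from the on-disk cache. A rough sketch of that check-and-evict step (the actual cache load from the tarball is elided):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID returns the ID podman reports for an image, or "" if it is not
// present in the runtime's store.
func imageID(image string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

// ensureFromCache mirrors the log above: if the runtime does not already hold
// the image at the expected digest, drop whatever is there and report that it
// must be loaded from the local cache tarball.
func ensureFromCache(image, wantID string) {
	if got := imageID(image); got != "" && got == wantID {
		fmt.Printf("%q already present, skipping\n", image)
		return
	}
	fmt.Printf("%q needs transfer: removing and loading from cache\n", image)
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // ignore "not found"
	// ... load cache/images/amd64/<image> into the runtime here
}

func main() {
	ensureFromCache("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
}
```
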
	I0719 15:47:46.465753   58817 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:47:46.465899   58817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:46.465973   58817 ssh_runner.go:195] Run: crio config
	I0719 15:47:46.524125   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:47:46.524152   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:46.524167   58817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:46.524190   58817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:47:46.524322   58817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:46.524476   58817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:47:46.534654   58817 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:46.534726   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:46.544888   58817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:47:46.565864   58817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:47:46.584204   58817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:47:46.603470   58817 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:46.607776   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:46.624713   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:46.752753   58817 ssh_runner.go:195] Run: sudo systemctl start kubelet
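
The /etc/hosts one-liner a few lines up strips any stale control-plane.minikube.internal entry and appends the current IP before the kubelet is restarted. The same idempotent update, sketched in Go (writing /etc/hosts still needs root, which is why the log runs it through sudo):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// setControlPlaneHost rewrites /etc/hosts so that exactly one line maps
// control-plane.minikube.internal to the given IP.
func setControlPlaneHost(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\tcontrol-plane.minikube.internal", ip))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setControlPlaneHost("/etc/hosts", "192.168.50.102"); err != nil {
		fmt.Println(err)
	}
}
```
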
	I0719 15:47:46.776115   58817 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:47:46.776151   58817 certs.go:194] generating shared ca certs ...
	I0719 15:47:46.776182   58817 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:46.776376   58817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:46.776431   58817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:46.776443   58817 certs.go:256] generating profile certs ...
	I0719 15:47:46.776559   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:47:46.776622   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:47:46.776673   58817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:47:46.776811   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:46.776860   58817 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:46.776880   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:46.776922   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:46.776961   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:46.776991   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:46.777051   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:46.777929   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:46.815207   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:46.863189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:46.894161   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:46.932391   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:47:46.981696   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:47:47.016950   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:47.043597   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:47:47.067408   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:47.092082   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:47.116639   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:47.142425   58817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:47.161443   58817 ssh_runner.go:195] Run: openssl version
	I0719 15:47:47.167678   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:47.180194   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185276   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185330   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.191437   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:47.203471   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:47.215645   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220392   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220444   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.226332   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:47.238559   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:47.251382   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256213   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256268   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.262261   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:47.275192   58817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:47.280176   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:47.288308   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:47.295013   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:47.301552   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:47.307628   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:47.313505   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
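
The openssl -checkend 86400 runs above exit non-zero when a certificate expires within 24 hours, which is what triggers regeneration. An equivalent check in Go using crypto/x509 (a sketch, not minikube's implementation):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd mirrors `openssl x509 -noout -in <path> -checkend 86400`:
// it returns an error if the certificate expires within the given window.
func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s (within %s)", path, cert.NotAfter, window)
	}
	return nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		if err := checkEnd(p, 24*time.Hour); err != nil {
			fmt.Println("would regenerate:", err)
		}
	}
}
```
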
	I0719 15:47:47.319956   58817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:47.320042   58817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:47.320097   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.359706   58817 cri.go:89] found id: ""
	I0719 15:47:47.359789   58817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:47.373816   58817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:47.373839   58817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:47.373907   58817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:47.386334   58817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:47.387432   58817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:47:47.388146   58817 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-3847/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862924" cluster setting kubeconfig missing "old-k8s-version-862924" context setting]
	I0719 15:47:47.389641   58817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:47.393000   58817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:47.404737   58817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.102
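
The kubeconfig "needs updating (will repair)" step adds the missing cluster and context stanzas for the profile before the reconfiguration check above. A minimal sketch with client-go's clientcmd package; the CA and credential fields are omitted here, and the server URL is only illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// ensureClusterEntry adds cluster and context stanzas for the profile if the
// kubeconfig does not already contain them, then writes the file back.
func ensureClusterEntry(kubeconfig, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return err
	}
	changed := false
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
		changed = true
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		changed = true
	}
	if !changed {
		fmt.Printf("%q already present in %s\n", name, kubeconfig)
		return nil
	}
	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	if err := ensureClusterEntry(
		"/home/jenkins/minikube-integration/19302-3847/kubeconfig",
		"old-k8s-version-862924",
		"https://192.168.50.102:8443",
	); err != nil {
		fmt.Println(err)
	}
}
```
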
	I0719 15:47:47.404770   58817 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:47.404782   58817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:47.404847   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.448460   58817 cri.go:89] found id: ""
	I0719 15:47:47.448529   58817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:47.466897   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:47.479093   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:47.479136   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:47.479201   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:47.490338   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:47.490425   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:47.502079   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:47.514653   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:47.514722   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:47.526533   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.536043   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:47.536109   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.545691   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:47.555221   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:47.555295   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:47.564645   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:47.574094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:47.740041   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:45.055856   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056318   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056347   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:45.056263   59996 retry.go:31] will retry after 1.300059023s: waiting for machine to come up
	I0719 15:47:46.357875   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358379   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358407   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:46.358331   59996 retry.go:31] will retry after 2.269558328s: waiting for machine to come up
	I0719 15:47:48.630965   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631641   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631674   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:48.631546   59996 retry.go:31] will retry after 2.829487546s: waiting for machine to come up
	I0719 15:47:47.449778   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:48.045481   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:48.045508   58417 pod_ready.go:81] duration metric: took 3.507466621s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.045521   58417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.272472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.545776   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.692516   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.799640   58817 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:48.799721   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.299983   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.800470   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.300833   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.800741   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.300351   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.800185   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.463569   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464003   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:51.463968   59996 retry.go:31] will retry after 2.917804786s: waiting for machine to come up
	I0719 15:47:54.383261   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383967   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383993   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:54.383924   59996 retry.go:31] will retry after 4.044917947s: waiting for machine to come up
	I0719 15:47:50.052168   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:51.052114   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:51.052135   58417 pod_ready.go:81] duration metric: took 3.006607122s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:51.052144   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059540   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:52.059563   58417 pod_ready.go:81] duration metric: took 1.007411773s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059576   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.066338   58417 pod_ready.go:102] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:54.567056   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.567076   58417 pod_ready.go:81] duration metric: took 2.507493559s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.567085   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571655   58417 pod_ready.go:92] pod "kube-proxy-z2z9r" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.571672   58417 pod_ready.go:81] duration metric: took 4.581191ms for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571680   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.575983   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.576005   58417 pod_ready.go:81] duration metric: took 4.315788ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.576017   58417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
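
The pod_ready.go lines wait on each control-plane pod individually until its Ready condition is True. A compact poll loop for a single pod, complementing the kube-system listing sketch earlier (the pod name and kubeconfig argument are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a single pod until its Ready condition is True or the
// timeout elapses, roughly what the pod_ready.go lines above are doing.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1])
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-no-preload-382231", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
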
	I0719 15:47:53.300353   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.800804   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.800691   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.800502   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.300314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.300773   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
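
After the kubeadm init phases, the tool polls pgrep every 500ms until a kube-apiserver process started from the minikube manifests shows up. A local sketch of that wait (the real check runs the same pgrep over SSH inside the guest):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf` until a kube-apiserver process
// whose command line mentions minikube appears, or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) (int, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			var pid int
			fmt.Sscanf(strings.TrimSpace(string(out)), "%d", &pid)
			return pid, nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return 0, fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerProcess(4 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```
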
	I0719 15:47:58.432420   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432945   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Found IP for machine: 192.168.61.144
	I0719 15:47:58.432976   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has current primary IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432988   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserving static IP address...
	I0719 15:47:58.433361   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.433395   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | skip adding static IP to network mk-default-k8s-diff-port-601445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"}
	I0719 15:47:58.433412   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserved static IP address: 192.168.61.144
	I0719 15:47:58.433430   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for SSH to be available...
	I0719 15:47:58.433442   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Getting to WaitForSSH function...
	I0719 15:47:58.435448   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435770   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.435807   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435868   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH client type: external
	I0719 15:47:58.435930   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa (-rw-------)
	I0719 15:47:58.435973   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:58.435992   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | About to run SSH command:
	I0719 15:47:58.436002   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | exit 0
	I0719 15:47:58.562187   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:58.562564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetConfigRaw
	I0719 15:47:58.563233   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.565694   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566042   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.566066   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566301   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:47:58.566469   59208 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:58.566489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:58.566684   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.569109   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569485   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.569512   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569594   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.569763   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.569912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.570022   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.570167   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.570398   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.570412   59208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:58.675164   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:58.675217   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675455   59208 buildroot.go:166] provisioning hostname "default-k8s-diff-port-601445"
	I0719 15:47:58.675487   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.678103   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678522   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.678564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678721   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.678908   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679074   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679198   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.679345   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.679516   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.679531   59208 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-601445 && echo "default-k8s-diff-port-601445" | sudo tee /etc/hostname
	I0719 15:47:58.802305   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-601445
	
	I0719 15:47:58.802336   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.805215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805582   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.805613   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805796   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.805981   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806139   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806322   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.806517   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.806689   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.806706   59208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-601445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-601445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-601445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:58.919959   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
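The two SSH commands above set the machine's hostname and make sure /etc/hosts resolves it; a minimal consolidated sketch of the same sequence, with the hostname taken from the log:

    # Set the transient and persistent hostname (as run over SSH above)
    sudo hostname default-k8s-diff-port-601445 && \
      echo "default-k8s-diff-port-601445" | sudo tee /etc/hostname

    # Point 127.0.1.1 at the new name, rewriting an existing entry if present,
    # otherwise appending one
    if ! grep -q 'default-k8s-diff-port-601445' /etc/hosts; then
      if grep -q '^127.0.1.1\s' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-601445/' /etc/hosts
      else
        echo '127.0.1.1 default-k8s-diff-port-601445' | sudo tee -a /etc/hosts
      fi
    fi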
	I0719 15:47:58.919985   59208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:58.920019   59208 buildroot.go:174] setting up certificates
	I0719 15:47:58.920031   59208 provision.go:84] configureAuth start
	I0719 15:47:58.920041   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.920283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.922837   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923193   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.923225   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923413   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.925832   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926128   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.926156   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926297   59208 provision.go:143] copyHostCerts
	I0719 15:47:58.926360   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:58.926374   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:58.926425   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:58.926512   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:58.926520   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:58.926543   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:58.926600   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:58.926609   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:58.926630   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:58.926682   59208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-601445 san=[127.0.0.1 192.168.61.144 default-k8s-diff-port-601445 localhost minikube]
	I0719 15:47:59.080911   59208 provision.go:177] copyRemoteCerts
	I0719 15:47:59.080966   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:59.080990   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084029   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.084059   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084219   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.084411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.084531   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.084674   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.172754   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:59.198872   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 15:47:59.222898   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 15:47:59.246017   59208 provision.go:87] duration metric: took 325.975105ms to configureAuth
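configureAuth amounts to generating a server certificate signed by the local minikube CA and pushing it, together with ca.pem, to /etc/docker on the guest. A rough manual equivalent of the copy step, assuming plain scp/ssh (paths abbreviated from the log; the transfer mechanism minikube's ssh_runner actually uses is internal):

    KEY=~/.minikube/machines/default-k8s-diff-port-601445/id_rsa
    HOST=docker@192.168.61.144

    # Stage the certs in /tmp, then move them into /etc/docker with sudo
    scp -i "$KEY" ~/.minikube/certs/ca.pem \
        ~/.minikube/machines/server.pem \
        ~/.minikube/machines/server-key.pem "$HOST":/tmp/
    ssh -i "$KEY" "$HOST" \
        'sudo mkdir -p /etc/docker && sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'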
	I0719 15:47:59.246037   59208 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:59.246215   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:47:59.246312   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.248757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249079   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.249111   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249354   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.249526   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249679   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249779   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.249924   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.250142   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.250161   59208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:59.743101   58376 start.go:364] duration metric: took 52.710718223s to acquireMachinesLock for "embed-certs-817144"
	I0719 15:47:59.743169   58376 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:59.743177   58376 fix.go:54] fixHost starting: 
	I0719 15:47:59.743553   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:59.743591   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:59.760837   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0719 15:47:59.761216   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:59.761734   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:47:59.761754   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:59.762080   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:59.762291   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:47:59.762504   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:47:59.764044   58376 fix.go:112] recreateIfNeeded on embed-certs-817144: state=Stopped err=<nil>
	I0719 15:47:59.764067   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	W0719 15:47:59.764217   58376 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:59.766063   58376 out.go:177] * Restarting existing kvm2 VM for "embed-certs-817144" ...
	I0719 15:47:56.582753   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:58.583049   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:59.508289   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:59.508327   59208 machine.go:97] duration metric: took 941.842272ms to provisionDockerMachine
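Provisioning ends by writing CRI-O's minikube options to a sysconfig file and restarting the runtime (the tee/restart one-liner issued over SSH at 15:47:59.250 above). An equivalent, spelled-out form of that command:

    # Equivalent to the tee/restart one-liner above
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio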
	I0719 15:47:59.508343   59208 start.go:293] postStartSetup for "default-k8s-diff-port-601445" (driver="kvm2")
	I0719 15:47:59.508359   59208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:59.508383   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.508687   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:59.508720   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.511449   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.511887   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.511911   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.512095   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.512275   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.512437   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.512580   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.596683   59208 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:59.600761   59208 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:59.600782   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:59.600841   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:59.600911   59208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:59.600996   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:59.609867   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:59.633767   59208 start.go:296] duration metric: took 125.408568ms for postStartSetup
	I0719 15:47:59.633803   59208 fix.go:56] duration metric: took 20.874627736s for fixHost
	I0719 15:47:59.633825   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.636600   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.636944   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.636977   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.637121   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.637328   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637495   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637640   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.637811   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.637989   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.637999   59208 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:47:59.742929   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404079.728807147
	
	I0719 15:47:59.742957   59208 fix.go:216] guest clock: 1721404079.728807147
	I0719 15:47:59.742967   59208 fix.go:229] Guest: 2024-07-19 15:47:59.728807147 +0000 UTC Remote: 2024-07-19 15:47:59.633807395 +0000 UTC m=+200.280673126 (delta=94.999752ms)
	I0719 15:47:59.743008   59208 fix.go:200] guest clock delta is within tolerance: 94.999752ms
	I0719 15:47:59.743013   59208 start.go:83] releasing machines lock for "default-k8s-diff-port-601445", held for 20.983876369s
	I0719 15:47:59.743040   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.743262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:59.746145   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746501   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.746534   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746662   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747297   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747461   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747553   59208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:59.747603   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.747714   59208 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:59.747738   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.750268   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750583   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750751   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750916   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750932   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.750942   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.751127   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751170   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.751269   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751353   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751421   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.751489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751646   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.834888   59208 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:59.859285   59208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:00.009771   59208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:00.015906   59208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:00.015973   59208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:00.032129   59208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:00.032150   59208 start.go:495] detecting cgroup driver to use...
	I0719 15:48:00.032215   59208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:00.050052   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:00.063282   59208 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:00.063341   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:00.078073   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:00.092872   59208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:00.217105   59208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:00.364335   59208 docker.go:233] disabling docker service ...
	I0719 15:48:00.364403   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:00.384138   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:00.400280   59208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:00.543779   59208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:00.671512   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:00.687337   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:00.708629   59208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:00.708690   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.720508   59208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:00.720580   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.732952   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.743984   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.756129   59208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:00.766873   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.777481   59208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.799865   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
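Taken together, the sed edits above pin the pause image, the cgroup driver and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm (the expected values below are reconstructed from the commands, not a dump of the actual file):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, based on the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])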
	I0719 15:48:00.812450   59208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:00.822900   59208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:00.822964   59208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:00.836117   59208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
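The status-255 error above is expected when the br_netfilter module is not yet loaded, which is why the log treats it as "might be okay" and falls back to modprobe; the recovery sequence is essentially:

    # Load the module, after which the bridge-netfilter sysctl exists
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    # Ensure IPv4 forwarding is on (as run above)
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"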
	I0719 15:48:00.845958   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:00.959002   59208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:01.104519   59208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:01.104598   59208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:01.110652   59208 start.go:563] Will wait 60s for crictl version
	I0719 15:48:01.110711   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:48:01.114358   59208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:01.156969   59208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:01.157063   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.187963   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.219925   59208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:47:58.299763   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.800069   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.299998   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.800005   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.300717   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.800601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.300433   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.800788   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.300324   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.221101   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:48:01.224369   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:01.224789   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224989   59208 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:01.229813   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
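The /etc/hosts rewrite above is a single bash one-liner; unpacked, it drops any stale host.minikube.internal entry, appends the current gateway address, and copies the result back into place:

    # Spelled-out equivalent of the one-liner above
    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.168.61.1\thost.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts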
	I0719 15:48:01.243714   59208 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:01.243843   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:01.243886   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:01.283013   59208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:01.283093   59208 ssh_runner.go:195] Run: which lz4
	I0719 15:48:01.287587   59208 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:48:01.291937   59208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:01.291965   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:02.810751   59208 crio.go:462] duration metric: took 1.52319928s to copy over tarball
	I0719 15:48:02.810846   59208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:59.767270   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Start
	I0719 15:47:59.767433   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring networks are active...
	I0719 15:47:59.768056   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network default is active
	I0719 15:47:59.768371   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network mk-embed-certs-817144 is active
	I0719 15:47:59.768804   58376 main.go:141] libmachine: (embed-certs-817144) Getting domain xml...
	I0719 15:47:59.769396   58376 main.go:141] libmachine: (embed-certs-817144) Creating domain...
	I0719 15:48:01.024457   58376 main.go:141] libmachine: (embed-certs-817144) Waiting to get IP...
	I0719 15:48:01.025252   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.025697   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.025741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.025660   60153 retry.go:31] will retry after 211.260956ms: waiting for machine to come up
	I0719 15:48:01.238027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.238561   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.238588   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.238529   60153 retry.go:31] will retry after 346.855203ms: waiting for machine to come up
	I0719 15:48:01.587201   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.587773   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.587815   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.587736   60153 retry.go:31] will retry after 327.69901ms: waiting for machine to come up
	I0719 15:48:01.917433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.917899   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.917931   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.917864   60153 retry.go:31] will retry after 474.430535ms: waiting for machine to come up
	I0719 15:48:02.393610   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.394139   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.394168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.394061   60153 retry.go:31] will retry after 491.247455ms: waiting for machine to come up
	I0719 15:48:02.886826   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.887296   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.887329   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.887249   60153 retry.go:31] will retry after 661.619586ms: waiting for machine to come up
	I0719 15:48:03.550633   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:03.551175   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:03.551199   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:03.551126   60153 retry.go:31] will retry after 1.10096194s: waiting for machine to come up
	I0719 15:48:00.583866   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:02.585144   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:03.300240   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.799829   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.299793   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.800609   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.300595   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.799844   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.800150   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.299923   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.800063   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.112520   59208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301644218s)
	I0719 15:48:05.112555   59208 crio.go:469] duration metric: took 2.301774418s to extract the tarball
	I0719 15:48:05.112565   59208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:05.151199   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:05.193673   59208 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:05.193701   59208 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:05.193712   59208 kubeadm.go:934] updating node { 192.168.61.144 8444 v1.30.3 crio true true} ...
	I0719 15:48:05.193836   59208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-601445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
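The empty ExecStart= line followed by a full ExecStart=... is the standard systemd drop-in idiom for replacing a unit's command rather than appending a second one. Once the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below) and the daemon is reloaded, the override can be checked with, for example:

    # Show the merged unit including the 10-kubeadm.conf drop-in
    systemctl cat kubelet
    # Confirm the effective command line after daemon-reload
    systemctl show kubelet -p ExecStart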
	I0719 15:48:05.193919   59208 ssh_runner.go:195] Run: crio config
	I0719 15:48:05.239103   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:05.239131   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:05.239146   59208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:05.239176   59208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-601445 NodeName:default-k8s-diff-port-601445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:05.239374   59208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-601445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:05.239441   59208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:05.249729   59208 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:05.249799   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:05.259540   59208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 15:48:05.277388   59208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:05.294497   59208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
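The generated kubeadm config lands on the guest as /var/tmp/minikube/kubeadm.yaml.new; that file is what the subsequent kubeadm run consumes. Purely as an illustration (this exact invocation is not shown in the log), it could be exercised without changing the node via:

    # Illustrative only: dry-run the generated config with the pinned kubeadm binary
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run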
	I0719 15:48:05.313990   59208 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:05.318959   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:05.332278   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:05.463771   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:05.480474   59208 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445 for IP: 192.168.61.144
	I0719 15:48:05.480499   59208 certs.go:194] generating shared ca certs ...
	I0719 15:48:05.480520   59208 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:05.480674   59208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:05.480732   59208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:05.480746   59208 certs.go:256] generating profile certs ...
	I0719 15:48:05.480859   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.key
	I0719 15:48:05.480937   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key.e31ea710
	I0719 15:48:05.480992   59208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key
	I0719 15:48:05.481128   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:05.481165   59208 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:05.481180   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:05.481210   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:05.481245   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:05.481276   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:05.481334   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:05.481940   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:05.524604   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:05.562766   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:05.618041   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:05.660224   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 15:48:05.689232   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:05.713890   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:05.738923   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:05.764447   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:05.793905   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:05.823630   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:05.849454   59208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:05.868309   59208 ssh_runner.go:195] Run: openssl version
	I0719 15:48:05.874423   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:05.887310   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.891994   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.892057   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.898173   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:05.911541   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:05.922829   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927537   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927600   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.933642   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:05.946269   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:05.958798   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963899   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963959   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.969801   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
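The three blocks above wire each CA bundle into the node's system trust store: the PEM is copied to /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL looks trusted CAs up. A minimal sketch of one iteration, using paths taken from the log:

    # Hash-and-link one CA bundle into the system trust store.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")     # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # <hash>.0 is OpenSSL's lookup name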
	I0719 15:48:05.980966   59208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:05.985487   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:05.991303   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:05.997143   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:06.003222   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:06.008984   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:06.014939   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
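Each control-plane certificate is then verified with `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 24 hours; here every check passed, so the existing certificates are kept. A rough shell equivalent of the checks above (only a subset of the certificate names is shown):

    # Report any cert that expires within 86400 s (24 h).
    for crt in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
        || echo "${crt}.crt expires within 24h"
    done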
	I0719 15:48:06.020976   59208 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:06.021059   59208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:06.021106   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.066439   59208 cri.go:89] found id: ""
	I0719 15:48:06.066503   59208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:06.080640   59208 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:06.080663   59208 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:06.080730   59208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:06.093477   59208 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:06.094740   59208 kubeconfig.go:125] found "default-k8s-diff-port-601445" server: "https://192.168.61.144:8444"
	I0719 15:48:06.096907   59208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:06.107974   59208 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.144
	I0719 15:48:06.108021   59208 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:06.108035   59208 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:06.108109   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.156149   59208 cri.go:89] found id: ""
	I0719 15:48:06.156222   59208 ssh_runner.go:195] Run: sudo systemctl stop kubelet
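The "stopping kube-system containers" step talks to the container runtime directly through crictl rather than going through the API server; since no kube-system containers were found, only the kubelet itself is stopped. Roughly equivalent to:

    # Stop any kube-system containers via CRI, then stop the kubelet.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && sudo crictl stop $ids
    sudo systemctl stop kubelet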
	I0719 15:48:06.172431   59208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:06.182482   59208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:06.182511   59208 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:06.182562   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 15:48:06.192288   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:06.192361   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:06.202613   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 15:48:06.212553   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:06.212624   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:06.223086   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.233949   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:06.234007   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.247224   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 15:48:06.257851   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:06.257908   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
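Each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so the next kubeadm phase regenerates it. In this run none of the four files existed, so every grep failed and each rm was a no-op. The pattern boils down to:

    # Drop any kubeconfig that does not reference the expected endpoint.
    ep="https://control-plane.minikube.internal:8444"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done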
	I0719 15:48:06.268650   59208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:06.279549   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:06.421964   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.407768   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.614213   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.686560   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
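Because this is a restart rather than a fresh install, individual `kubeadm init` phases are re-run against the refreshed /var/tmp/minikube/kubeadm.yaml instead of a full `kubeadm init`. The five invocations above amount to:

    # Re-run the init phases needed to bring the control plane back up.
    # $phase is intentionally unquoted so "certs all" expands to two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done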
	I0719 15:48:07.769476   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:07.769590   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.270472   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.770366   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.795057   59208 api_server.go:72] duration metric: took 1.025580277s to wait for apiserver process to appear ...
	I0719 15:48:08.795086   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:08.795112   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:08.795617   59208 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0719 15:48:09.295459   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:04.653309   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:04.653784   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:04.653846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:04.653753   60153 retry.go:31] will retry after 1.276153596s: waiting for machine to come up
	I0719 15:48:05.931365   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:05.931820   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:05.931848   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:05.931798   60153 retry.go:31] will retry after 1.372328403s: waiting for machine to come up
	I0719 15:48:07.305390   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:07.305892   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:07.305922   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:07.305850   60153 retry.go:31] will retry after 1.738311105s: waiting for machine to come up
	I0719 15:48:09.046095   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:09.046526   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:09.046558   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:09.046481   60153 retry.go:31] will retry after 2.169449629s: waiting for machine to come up
	I0719 15:48:05.084157   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:07.583246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:09.584584   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:11.457584   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.457651   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.457670   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.490130   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.490165   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.795439   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.803724   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:11.803757   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.295287   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.300002   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:12.300034   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.795285   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.800067   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:48:12.808020   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:12.808045   59208 api_server.go:131] duration metric: took 4.012952016s to wait for apiserver health ...
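The health wait above polls https://192.168.61.144:8444/healthz until it answers 200. The early 403s are expected while the RBAC bootstrap roles that authorize the anonymous probe do not exist yet, and the 500 bodies list exactly which post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending. A hand-rolled equivalent of the loop:

    # Poll the apiserver health endpoint until it returns HTTP 200 ("ok").
    until curl -ksf https://192.168.61.144:8444/healthz >/dev/null; do
      sleep 0.5
    done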
	I0719 15:48:12.808055   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:12.808064   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:12.810134   59208 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:08.300278   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.799805   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.299882   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.800690   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.300543   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.300260   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.800160   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.812011   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:12.824520   59208 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
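With the apiserver healthy, the bridge CNI is set up by creating /etc/cni/net.d and writing a 496-byte 1-k8s.conflist into it. The file's exact content is not shown in the log; a bridge-plus-portmap conflist of this general shape would be written with something like the following (illustrative values only, including the 10.244.0.0/16 subnet):

    # Create the CNI config dir and write an illustrative bridge conflist.
    sudo mkdir -p /etc/cni/net.d
    printf '%s\n' '{"cniVersion":"0.3.1","name":"bridge","plugins":[{"type":"bridge","bridge":"bridge","isDefaultGateway":true,"ipMasq":true,"ipam":{"type":"host-local","subnet":"10.244.0.0/16"}},{"type":"portmap","capabilities":{"portMappings":true}}]}' \
      | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null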
	I0719 15:48:12.846711   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:12.855286   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:12.855315   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:12.855322   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:12.855329   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:12.855335   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:12.855345   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:12.855353   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:12.855360   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:12.855369   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:12.855377   59208 system_pods.go:74] duration metric: took 8.645314ms to wait for pod list to return data ...
	I0719 15:48:12.855390   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:12.858531   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:12.858556   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:12.858566   59208 node_conditions.go:105] duration metric: took 3.171526ms to run NodePressure ...
	I0719 15:48:12.858581   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:13.176014   59208 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180575   59208 kubeadm.go:739] kubelet initialised
	I0719 15:48:13.180602   59208 kubeadm.go:740] duration metric: took 4.561708ms waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180612   59208 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:13.187723   59208 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.204023   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204052   59208 pod_ready.go:81] duration metric: took 16.303152ms for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.204061   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204070   59208 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.212768   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212790   59208 pod_ready.go:81] duration metric: took 8.709912ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.212800   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212812   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.220452   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220474   59208 pod_ready.go:81] duration metric: took 7.650656ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.220482   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220489   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.251973   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.251997   59208 pod_ready.go:81] duration metric: took 31.499608ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.252008   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.252029   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.650914   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650940   59208 pod_ready.go:81] duration metric: took 398.904724ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.650948   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650954   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.050582   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050615   59208 pod_ready.go:81] duration metric: took 399.652069ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.050630   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050642   59208 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.450349   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450379   59208 pod_ready.go:81] duration metric: took 399.72875ms for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.450391   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450399   59208 pod_ready.go:38] duration metric: took 1.269776818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:14.450416   59208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:14.462296   59208 ops.go:34] apiserver oom_adj: -16
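As a final sanity check on the restarted control plane, the apiserver's OOM score adjustment is read back; the value -16 means the kernel is strongly discouraged from OOM-killing the process. Equivalent one-liner:

    # Read the kube-apiserver's OOM score adjustment (-16 in this run).
    cat /proc/$(pgrep kube-apiserver)/oom_adj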
	I0719 15:48:14.462318   59208 kubeadm.go:597] duration metric: took 8.38163922s to restartPrimaryControlPlane
	I0719 15:48:14.462329   59208 kubeadm.go:394] duration metric: took 8.441360513s to StartCluster
	I0719 15:48:14.462348   59208 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.462422   59208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:14.464082   59208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.464400   59208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:14.464459   59208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:14.464531   59208 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464570   59208 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.464581   59208 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:14.464592   59208 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464610   59208 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464636   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:14.464670   59208 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:14.464672   59208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-601445"
	W0719 15:48:14.464684   59208 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:14.464613   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.464740   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.465050   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465111   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465151   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465178   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465235   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.466230   59208 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:11.217150   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:11.217605   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:11.217634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:11.217561   60153 retry.go:31] will retry after 3.406637692s: waiting for machine to come up
	I0719 15:48:14.467899   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:14.481294   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0719 15:48:14.481538   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0719 15:48:14.481541   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0719 15:48:14.481658   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.482122   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482145   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482363   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482387   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482461   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482478   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482590   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482704   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482762   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482853   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.483131   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483159   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.483199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483217   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.486437   59208 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.486462   59208 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:14.486492   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.486893   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.486932   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.498388   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0719 15:48:14.498897   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0719 15:48:14.498952   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499251   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499660   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499678   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.499838   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499853   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.500068   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500168   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500232   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.500410   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.501505   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0719 15:48:14.501876   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.502391   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.502413   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.502456   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.502745   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.503006   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.503314   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.503341   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.505162   59208 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:14.505166   59208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:12.084791   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.582986   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.506465   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:14.506487   59208 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:14.506506   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.506585   59208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.506604   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:14.506628   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.510227   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511092   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511134   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511207   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511231   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511257   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511370   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511390   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511570   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511574   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511662   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.511713   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511787   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511840   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.520612   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0719 15:48:14.521013   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.521451   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.521470   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.521817   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.522016   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.523622   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.523862   59208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.523876   59208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:14.523895   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.526426   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.526882   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.526941   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.527060   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.527190   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.527344   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.527439   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.674585   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:14.693700   59208 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:14.752990   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.856330   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:14.856350   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:14.884762   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:14.884784   59208 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:14.895548   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.915815   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:14.915844   59208 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:14.979442   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
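The addon manifests (storageclass, storage-provisioner, metrics-server) are copied under /etc/kubernetes/addons and applied from inside the VM with the kubectl binary and kubeconfig that minikube places on the node, for example:

    # Apply the metrics-server manifests with the node-local kubectl and kubeconfig.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.3/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml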
	I0719 15:48:15.098490   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098517   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098869   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.098893   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.098902   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.099141   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.099158   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.105078   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.105252   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.105506   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.105526   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.802868   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.802892   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803265   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803279   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.803285   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.803517   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803530   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803577   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.905945   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.905972   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906244   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906266   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906266   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.906275   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.906283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906484   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906496   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906511   59208 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:15.908671   59208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 15:48:13.299986   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.800036   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.300736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.799875   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.300297   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.800535   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.299951   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.800667   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.300251   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.800590   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.910057   59208 addons.go:510] duration metric: took 1.445597408s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 15:48:16.697266   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:18.698379   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.627319   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:14.627800   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:14.627822   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:14.627767   60153 retry.go:31] will retry after 4.38444645s: waiting for machine to come up
	I0719 15:48:19.016073   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016711   58376 main.go:141] libmachine: (embed-certs-817144) Found IP for machine: 192.168.72.37
	I0719 15:48:19.016742   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has current primary IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016749   58376 main.go:141] libmachine: (embed-certs-817144) Reserving static IP address...
	I0719 15:48:19.017180   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.017204   58376 main.go:141] libmachine: (embed-certs-817144) Reserved static IP address: 192.168.72.37
	I0719 15:48:19.017222   58376 main.go:141] libmachine: (embed-certs-817144) DBG | skip adding static IP to network mk-embed-certs-817144 - found existing host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"}
	I0719 15:48:19.017239   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Getting to WaitForSSH function...
	I0719 15:48:19.017254   58376 main.go:141] libmachine: (embed-certs-817144) Waiting for SSH to be available...
	I0719 15:48:19.019511   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.019867   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.019896   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.020064   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH client type: external
	I0719 15:48:19.020080   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa (-rw-------)
	I0719 15:48:19.020107   58376 main.go:141] libmachine: (embed-certs-817144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:48:19.020115   58376 main.go:141] libmachine: (embed-certs-817144) DBG | About to run SSH command:
	I0719 15:48:19.020124   58376 main.go:141] libmachine: (embed-certs-817144) DBG | exit 0
	I0719 15:48:19.150328   58376 main.go:141] libmachine: (embed-certs-817144) DBG | SSH cmd err, output: <nil>: 
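
The exit-0 probe above is how libmachine decides SSH is ready: it shells out to the system ssh client with the options printed in the DBG line and retries until the command succeeds. A minimal standalone sketch of the same reachability check, using the IP and key path from this log (the retry loop and sleep interval are illustrative assumptions, not minikube's exact code):

    KEY=/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa
    # Keep probing until the guest's sshd accepts the key and 'exit 0' returns success.
    until ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
              -i "$KEY" -p 22 docker@192.168.72.37 'exit 0'; do
      sleep 2
    done
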
	I0719 15:48:19.150676   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetConfigRaw
	I0719 15:48:19.151317   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.154087   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154600   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.154634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154907   58376 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/config.json ...
	I0719 15:48:19.155143   58376 machine.go:94] provisionDockerMachine start ...
	I0719 15:48:19.155168   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:19.155369   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.157741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.158060   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158175   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.158368   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158618   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158769   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.158945   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.159144   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.159161   58376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:48:19.274836   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:48:19.274863   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275148   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:48:19.275174   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275373   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.278103   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278489   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.278518   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.278892   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279111   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279299   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.279577   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.279798   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.279815   58376 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-817144 && echo "embed-certs-817144" | sudo tee /etc/hostname
	I0719 15:48:19.413956   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-817144
	
	I0719 15:48:19.413988   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.416836   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.417196   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417408   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.417599   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417777   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417911   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.418083   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.418274   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.418290   58376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-817144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-817144/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-817144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:48:16.583538   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.083431   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.541400   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:48:19.541439   58376 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:48:19.541464   58376 buildroot.go:174] setting up certificates
	I0719 15:48:19.541478   58376 provision.go:84] configureAuth start
	I0719 15:48:19.541495   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.541801   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.544209   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544579   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.544608   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544766   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.547206   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.547570   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547714   58376 provision.go:143] copyHostCerts
	I0719 15:48:19.547772   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:48:19.547782   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:48:19.547827   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:48:19.547939   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:48:19.547949   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:48:19.547969   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:48:19.548024   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:48:19.548031   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:48:19.548047   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:48:19.548093   58376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.embed-certs-817144 san=[127.0.0.1 192.168.72.37 embed-certs-817144 localhost minikube]
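
provision.go above issues a fresh server certificate from the shared minikube CA, with SANs covering 127.0.0.1, the node IP, the hostname, localhost and minikube. Minikube does this in Go via crypto/x509; an equivalent openssl sketch using the file names from this log (the key size and validity period here are illustrative assumptions) would be:

    CERTS=/home/jenkins/minikube-integration/19302-3847/.minikube/certs
    # Generate the machine key and CSR, then sign it with the minikube CA, adding the SANs listed above.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.embed-certs-817144/CN=embed-certs-817144" -out server.csr
    openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
      -CAcreateserial -days 825 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.72.37,DNS:embed-certs-817144,DNS:localhost,DNS:minikube')
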
	I0719 15:48:20.024082   58376 provision.go:177] copyRemoteCerts
	I0719 15:48:20.024137   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:48:20.024157   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.026940   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027322   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.027358   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027541   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.027819   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.028011   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.028165   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.117563   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:48:20.144428   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:48:20.171520   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:48:20.195188   58376 provision.go:87] duration metric: took 653.6924ms to configureAuth
	I0719 15:48:20.195215   58376 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:48:20.195432   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:20.195518   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.198648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.198970   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.199007   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.199126   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.199335   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199527   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199687   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.199849   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.200046   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.200063   58376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:48:20.502753   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:48:20.502782   58376 machine.go:97] duration metric: took 1.347623735s to provisionDockerMachine
	I0719 15:48:20.502794   58376 start.go:293] postStartSetup for "embed-certs-817144" (driver="kvm2")
	I0719 15:48:20.502805   58376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:48:20.502821   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.503204   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:48:20.503248   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.506142   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.506563   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506697   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.506938   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.507125   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.507258   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.593356   58376 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:48:20.597843   58376 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:48:20.597877   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:48:20.597948   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:48:20.598048   58376 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:48:20.598164   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:48:20.607951   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:20.634860   58376 start.go:296] duration metric: took 132.043928ms for postStartSetup
	I0719 15:48:20.634900   58376 fix.go:56] duration metric: took 20.891722874s for fixHost
	I0719 15:48:20.634919   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.637846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638181   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.638218   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638439   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.638674   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.638884   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.639054   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.639256   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.639432   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.639444   58376 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:48:20.755076   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404100.730818472
	
	I0719 15:48:20.755107   58376 fix.go:216] guest clock: 1721404100.730818472
	I0719 15:48:20.755115   58376 fix.go:229] Guest: 2024-07-19 15:48:20.730818472 +0000 UTC Remote: 2024-07-19 15:48:20.634903926 +0000 UTC m=+356.193225446 (delta=95.914546ms)
	I0719 15:48:20.755134   58376 fix.go:200] guest clock delta is within tolerance: 95.914546ms
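
The clock check above samples the guest with date +%s.%N, compares it against the host's reading taken around the same moment, and only resets the guest clock when the delta exceeds minikube's tolerance (the 95.9ms here is well inside it). Reduced to a shell sketch, with the 2-second threshold being an illustrative assumption rather than minikube's actual limit:

    guest=$(ssh docker@192.168.72.37 'date +%s.%N')   # guest wall clock
    host=$(date +%s.%N)                               # host wall clock, read immediately after
    awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; exit !(d > 2) }' \
      && echo "guest clock drifted, would resync" \
      || echo "guest clock delta within tolerance"
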
	I0719 15:48:20.755139   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 21.011996674s
	I0719 15:48:20.755171   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.755465   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:20.758255   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758621   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.758644   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758861   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759348   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759545   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759656   58376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:48:20.759720   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.759780   58376 ssh_runner.go:195] Run: cat /version.json
	I0719 15:48:20.759802   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.762704   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.762833   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763161   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763202   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763399   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763493   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763545   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763608   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763693   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763772   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764001   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763996   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.764156   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764278   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.867430   58376 ssh_runner.go:195] Run: systemctl --version
	I0719 15:48:20.873463   58376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:21.029369   58376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:21.035953   58376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:21.036028   58376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:21.054352   58376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:21.054381   58376 start.go:495] detecting cgroup driver to use...
	I0719 15:48:21.054440   58376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:21.071903   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:21.088624   58376 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:21.088688   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:21.104322   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:21.120089   58376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:21.242310   58376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:21.422514   58376 docker.go:233] disabling docker service ...
	I0719 15:48:21.422589   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:21.439213   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:21.454361   58376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:21.577118   58376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:21.704150   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:21.719160   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:21.738765   58376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:21.738817   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.750720   58376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:21.750798   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.763190   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.775630   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.787727   58376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:21.799520   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.812016   58376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.830564   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
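
The four edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) all target /etc/crio/crio.conf.d/02-crio.conf. Pieced together from those sed/grep commands, the drop-in ends up containing roughly the following; this is a reconstruction for readability, not a dump of the actual file on the VM:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
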
	I0719 15:48:21.841770   58376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:21.851579   58376 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:21.851651   58376 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:21.864529   58376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
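
The status-255 sysctl above simply means the br_netfilter module was not loaded yet, so /proc/sys/net/bridge/ did not exist; the follow-up modprobe plus the ip_forward write is the standard bridge-netfilter preparation for a CNI-backed runtime:

    sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # let the node forward pod traffic
    sudo sysctl net.bridge.bridge-nf-call-iptables        # now resolves instead of "No such file or directory"
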
	I0719 15:48:21.874301   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:21.994669   58376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:22.131448   58376 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:22.131521   58376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:22.137328   58376 start.go:563] Will wait 60s for crictl version
	I0719 15:48:22.137391   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:48:22.141409   58376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:22.182947   58376 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:22.183029   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.217804   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.252450   58376 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:48:18.300557   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.800420   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.300696   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.799874   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.300803   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.800634   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.300760   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.799929   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.300267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.800463   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.197350   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:22.197536   59208 node_ready.go:49] node "default-k8s-diff-port-601445" has status "Ready":"True"
	I0719 15:48:22.197558   59208 node_ready.go:38] duration metric: took 7.503825721s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:22.197568   59208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:22.203380   59208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:24.211899   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:22.253862   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:22.256397   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256763   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:22.256791   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256968   58376 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:22.261184   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:22.274804   58376 kubeadm.go:883] updating cluster {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:22.274936   58376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:22.274994   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:22.317501   58376 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:22.317559   58376 ssh_runner.go:195] Run: which lz4
	I0719 15:48:22.321646   58376 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:48:22.326455   58376 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:22.326478   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:23.820083   58376 crio.go:462] duration metric: took 1.498469232s to copy over tarball
	I0719 15:48:23.820155   58376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:48:21.583230   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.585191   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.300116   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.800737   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.300641   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.800158   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.300678   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.800635   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.299778   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.799791   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.299845   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.710838   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.786269   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:26.105248   58376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.285062307s)
	I0719 15:48:26.105271   58376 crio.go:469] duration metric: took 2.285164513s to extract the tarball
	I0719 15:48:26.105279   58376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:26.142811   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:26.185631   58376 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:26.185660   58376 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:26.185668   58376 kubeadm.go:934] updating node { 192.168.72.37 8443 v1.30.3 crio true true} ...
	I0719 15:48:26.185784   58376 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-817144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:26.185857   58376 ssh_runner.go:195] Run: crio config
	I0719 15:48:26.238150   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:26.238172   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:26.238183   58376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:26.238211   58376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.37 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-817144 NodeName:embed-certs-817144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:26.238449   58376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-817144"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
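
The kubeadm manifest above is what later gets written to /var/tmp/minikube/kubeadm.yaml.new (the 2159-byte scp below). On a clean node a config of this shape is handed to kubeadm directly; in this restart path minikube instead detects the existing kubelet/etcd state and reuses it. For illustration only, a typical first-boot invocation would look like the following (the ignore-preflight flag is a common choice, not taken from this log):

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests
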
	
	I0719 15:48:26.238515   58376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:26.249200   58376 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:26.249278   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:26.258710   58376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 15:48:26.279235   58376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:26.299469   58376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0719 15:48:26.317789   58376 ssh_runner.go:195] Run: grep 192.168.72.37	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:26.321564   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:26.333153   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:26.452270   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:26.469344   58376 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144 for IP: 192.168.72.37
	I0719 15:48:26.469366   58376 certs.go:194] generating shared ca certs ...
	I0719 15:48:26.469382   58376 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:26.469530   58376 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:26.469586   58376 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:26.469601   58376 certs.go:256] generating profile certs ...
	I0719 15:48:26.469694   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/client.key
	I0719 15:48:26.469791   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key.928d4c24
	I0719 15:48:26.469846   58376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key
	I0719 15:48:26.469982   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:26.470021   58376 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:26.470035   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:26.470071   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:26.470105   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:26.470140   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:26.470197   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:26.470812   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:26.508455   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:26.537333   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:26.565167   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:26.601152   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 15:48:26.636408   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:26.669076   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:26.695438   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:26.718897   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:26.741760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:26.764760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:26.787772   58376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:26.807332   58376 ssh_runner.go:195] Run: openssl version
	I0719 15:48:26.815182   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:26.827373   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831926   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831973   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.837923   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:48:26.849158   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:26.860466   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865178   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865249   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.870873   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:26.882044   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:26.893283   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897750   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897809   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.903395   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
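
Each CA above is dropped into /usr/share/ca-certificates and then exposed under /etc/ssl/certs through its subject-hash name, which is where the otherwise opaque 51391683.0, 3ec20f2e.0 and b5213941.0 links come from. The generic form of that step is:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
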
	I0719 15:48:26.914389   58376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:26.918904   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:26.924659   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:26.930521   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:26.936808   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:26.942548   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:26.948139   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
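The `-checkend 86400` invocations above verify that each control-plane certificate remains valid for at least the next 24 hours before the existing certs are reused. A pure-Go equivalent of that check is sketched below, assuming a certificate path taken from the log; it is not minikube's own code.

```go
// Minimal sketch of what "openssl x509 -checkend 86400" verifies: the
// certificate must not expire within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the look-ahead window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
```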
	I0719 15:48:26.954557   58376 kubeadm.go:392] StartCluster: {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:26.954644   58376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:26.954722   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:26.994129   58376 cri.go:89] found id: ""
	I0719 15:48:26.994205   58376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:27.006601   58376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:27.006624   58376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:27.006699   58376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:27.017166   58376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:27.018580   58376 kubeconfig.go:125] found "embed-certs-817144" server: "https://192.168.72.37:8443"
	I0719 15:48:27.021622   58376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:27.033000   58376 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.37
	I0719 15:48:27.033033   58376 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:27.033044   58376 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:27.033083   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:27.073611   58376 cri.go:89] found id: ""
	I0719 15:48:27.073678   58376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:27.092986   58376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:27.103557   58376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:27.103580   58376 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:27.103636   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:48:27.113687   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:27.113752   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:27.123696   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:48:27.132928   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:27.132984   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:27.142566   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.152286   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:27.152335   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.161701   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:48:27.171532   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:27.171591   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:27.181229   58376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:27.192232   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:27.330656   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.287561   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.513476   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.616308   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.704518   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:28.704605   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.205265   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.082992   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.746255   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.300034   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.800118   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.300099   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.800538   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.300194   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.800056   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.300473   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.300181   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.800267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.704706   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.204728   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.221741   58376 api_server.go:72] duration metric: took 1.517220815s to wait for apiserver process to appear ...
	I0719 15:48:30.221766   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:30.221786   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.665104   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.665138   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.665152   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.703238   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.703271   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.722495   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.748303   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:32.748344   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.222861   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.227076   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.227104   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.722705   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.734658   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.734683   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:34.222279   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:34.227870   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:48:34.233621   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:34.233646   58376 api_server.go:131] duration metric: took 4.011873202s to wait for apiserver health ...
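The healthz sequence above shows the usual restart progression: 403 while anonymous access to /healthz is still blocked (the RBAC bootstrap roles do not exist yet), then 500 while individual poststarthook checks are still failing, and finally 200 "ok". A hedged sketch of that polling pattern follows; the URL is the one from the log, and TLS verification is skipped only to keep the example self-contained (minikube itself trusts the cluster CA).

```go
// Sketch of the /healthz polling loop shown in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			// 403: anonymous access blocked until RBAC bootstrap roles exist.
			// 500: one or more poststarthook checks still failing.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.37:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```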
	I0719 15:48:34.233656   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:34.233664   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:34.235220   58376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:30.210533   59208 pod_ready.go:92] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.210557   59208 pod_ready.go:81] duration metric: took 8.007151724s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.210568   59208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215669   59208 pod_ready.go:92] pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.215692   59208 pod_ready.go:81] duration metric: took 5.116005ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215702   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222633   59208 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.222655   59208 pod_ready.go:81] duration metric: took 6.947228ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222664   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227631   59208 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.227656   59208 pod_ready.go:81] duration metric: took 4.985227ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227667   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405047   59208 pod_ready.go:92] pod "kube-proxy-r7b2z" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.405073   59208 pod_ready.go:81] duration metric: took 177.397954ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405085   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805843   59208 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.805877   59208 pod_ready.go:81] duration metric: took 400.783803ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805890   59208 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:32.821231   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.236303   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:34.248133   58376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
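The scp above drops a bridge CNI configuration into /etc/cni/net.d/1-k8s.conflist on the node. For orientation, the block below prints a typical bridge + host-local conflist of that kind; the exact contents, plugin list, and subnet minikube writes may differ, so treat this purely as an illustration.

```go
// Illustrative only: a typical bridge + host-local CNI conflist, similar in
// shape to the /etc/cni/net.d/1-k8s.conflist written above.
package main

import "fmt"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() { fmt.Println(bridgeConflist) }
```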
	I0719 15:48:34.270683   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:34.279907   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:34.279939   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:34.279946   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:34.279953   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:34.279960   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:34.279966   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:34.279972   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:34.279977   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:34.279982   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:34.279988   58376 system_pods.go:74] duration metric: took 9.282886ms to wait for pod list to return data ...
	I0719 15:48:34.279995   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:34.283597   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:34.283623   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:34.283634   58376 node_conditions.go:105] duration metric: took 3.634999ms to run NodePressure ...
	I0719 15:48:34.283649   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:31.082803   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:33.583510   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.586116   58376 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590095   58376 kubeadm.go:739] kubelet initialised
	I0719 15:48:34.590119   58376 kubeadm.go:740] duration metric: took 3.977479ms waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590128   58376 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:34.594987   58376 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.600192   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600212   58376 pod_ready.go:81] duration metric: took 5.205124ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.600220   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600225   58376 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.603934   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603952   58376 pod_ready.go:81] duration metric: took 3.719853ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.603959   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603965   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.607778   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607803   58376 pod_ready.go:81] duration metric: took 3.830174ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.607817   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607826   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.673753   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673775   58376 pod_ready.go:81] duration metric: took 65.937586ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.673783   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673788   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.075506   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075539   58376 pod_ready.go:81] duration metric: took 401.743578ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.075548   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075554   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.474518   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474546   58376 pod_ready.go:81] duration metric: took 398.985628ms for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.474558   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474567   58376 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.874540   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874567   58376 pod_ready.go:81] duration metric: took 399.989978ms for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.874576   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874582   58376 pod_ready.go:38] duration metric: took 1.284443879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
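The pod_ready lines above poll each system-critical pod for the PodReady condition and skip pods whose node is not yet "Ready". Below is a hedged client-go sketch of that readiness-wait pattern; the kubeconfig path and pod name are taken from the log for illustration, and this is not minikube's pod_ready.go.

```go
// Sketch of the pod readiness wait loop reflected in the pod_ready lines.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(client, "kube-system", "coredns-7db6d8ff4d-n945p", 4*time.Minute); err != nil {
		fmt.Println("pod not ready:", err)
	}
}
```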
	I0719 15:48:35.874646   58376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:35.886727   58376 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:35.886751   58376 kubeadm.go:597] duration metric: took 8.880120513s to restartPrimaryControlPlane
	I0719 15:48:35.886760   58376 kubeadm.go:394] duration metric: took 8.932210528s to StartCluster
	I0719 15:48:35.886781   58376 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.886859   58376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:35.888389   58376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.888642   58376 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:35.888722   58376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:35.888781   58376 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-817144"
	I0719 15:48:35.888810   58376 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-817144"
	I0719 15:48:35.888824   58376 addons.go:69] Setting default-storageclass=true in profile "embed-certs-817144"
	I0719 15:48:35.888839   58376 addons.go:69] Setting metrics-server=true in profile "embed-certs-817144"
	I0719 15:48:35.888875   58376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-817144"
	I0719 15:48:35.888888   58376 addons.go:234] Setting addon metrics-server=true in "embed-certs-817144"
	W0719 15:48:35.888897   58376 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:35.888931   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.888840   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0719 15:48:35.888843   58376 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:35.889000   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.889231   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889242   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889247   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889270   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889272   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889282   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.890641   58376 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:35.892144   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:35.905134   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0719 15:48:35.905572   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.905788   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0719 15:48:35.906107   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906132   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.906171   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.906496   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.906825   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906846   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.907126   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.907179   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.907215   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.907289   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.908269   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0719 15:48:35.908747   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.909343   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.909367   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.909787   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.910337   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910382   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.910615   58376 addons.go:234] Setting addon default-storageclass=true in "embed-certs-817144"
	W0719 15:48:35.910632   58376 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:35.910662   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.910937   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910965   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.926165   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 15:48:35.926905   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.926944   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0719 15:48:35.927369   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.927573   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927636   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927829   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927847   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927959   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928512   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.928551   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.928759   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928824   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0719 15:48:35.928964   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.929176   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.929546   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.929557   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.929927   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.930278   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.931161   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.931773   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.933234   58376 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:35.933298   58376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:35.934543   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:35.934556   58376 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:35.934569   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.934629   58376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:35.934642   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:35.934657   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.938300   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938628   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.938648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938679   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939150   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939340   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.939433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.939479   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939536   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.939619   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939673   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.939937   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.940081   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.940190   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.947955   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0719 15:48:35.948206   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.948643   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.948654   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.948961   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.949119   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.950572   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.951770   58376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:35.951779   58376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:35.951791   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.957009   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957381   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.957405   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957550   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.957717   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.957841   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.957953   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:36.072337   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:36.091547   58376 node_ready.go:35] waiting up to 6m0s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:36.182328   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:36.195704   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:36.195729   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:36.221099   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:36.224606   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:36.224632   58376 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:36.247264   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:36.247289   58376 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:36.300365   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:37.231670   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010526005s)
	I0719 15:48:37.231729   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231743   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.231765   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049406285s)
	I0719 15:48:37.231807   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231822   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232034   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232085   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232096   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.232100   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232105   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.232115   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232345   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232366   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233486   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233529   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233541   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.233549   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.233792   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233815   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233832   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.240487   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.240502   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.240732   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.240754   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.240755   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288064   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288085   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288370   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288389   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288378   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288400   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288406   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288595   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288606   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288652   58376 addons.go:475] Verifying addon metrics-server=true in "embed-certs-817144"
	I0719 15:48:37.290497   58376 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
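The addon enablement above boils down to copying the metrics-server manifests into /etc/kubernetes/addons on the node and applying them with the node-local kubectl against /var/lib/minikube/kubeconfig. The sketch below mirrors the apply command shown in the log as if run on the node itself; it is a simplification (the log runs it over SSH with sudo), not minikube's addons code.

```go
// Sketch of the addon apply step: run the node's kubectl against the
// node-local kubeconfig to apply the scp'd metrics-server manifests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}
```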
	I0719 15:48:33.300279   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.800631   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.300013   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.800051   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.300468   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.800383   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.300186   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.800623   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.300068   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.799841   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.314792   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.814653   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.291961   58376 addons.go:510] duration metric: took 1.403238435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:48:38.096793   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.584345   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.585215   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:38.300002   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.800639   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.300564   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.800314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.300642   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.799787   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.299849   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.300242   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.800481   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.818959   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.313745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:44.314213   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:40.596246   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.095976   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.595640   58376 node_ready.go:49] node "embed-certs-817144" has status "Ready":"True"
	I0719 15:48:43.595659   58376 node_ready.go:38] duration metric: took 7.504089345s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:43.595667   58376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:43.600832   58376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605878   58376 pod_ready.go:92] pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.605900   58376 pod_ready.go:81] duration metric: took 5.046391ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605912   58376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610759   58376 pod_ready.go:92] pod "etcd-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.610778   58376 pod_ready.go:81] duration metric: took 4.85915ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610788   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615239   58376 pod_ready.go:92] pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.615257   58376 pod_ready.go:81] duration metric: took 4.46126ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615267   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619789   58376 pod_ready.go:92] pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.619804   58376 pod_ready.go:81] duration metric: took 4.530085ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619814   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998585   58376 pod_ready.go:92] pod "kube-proxy-4d4g9" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.998612   58376 pod_ready.go:81] duration metric: took 378.78761ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998622   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:40.084033   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.582983   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:43.300412   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.300117   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.799821   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.300031   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.800676   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.300710   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.800307   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.800008   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.812904   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.313178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:46.004415   58376 pod_ready.go:102] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.006304   58376 pod_ready.go:92] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:48.006329   58376 pod_ready.go:81] duration metric: took 4.00769937s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:48.006339   58376 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:45.082973   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:47.582224   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.582782   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.300512   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.799929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:48.799998   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:48.839823   58817 cri.go:89] found id: ""
	I0719 15:48:48.839845   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.839852   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:48.839863   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:48.839920   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:48.874635   58817 cri.go:89] found id: ""
	I0719 15:48:48.874661   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.874671   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:48.874679   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:48.874736   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:48.909391   58817 cri.go:89] found id: ""
	I0719 15:48:48.909417   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.909426   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:48.909431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:48.909491   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:48.951232   58817 cri.go:89] found id: ""
	I0719 15:48:48.951258   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.951265   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:48.951271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:48.951323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:48.984391   58817 cri.go:89] found id: ""
	I0719 15:48:48.984413   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.984420   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:48.984426   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:48.984481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:49.018949   58817 cri.go:89] found id: ""
	I0719 15:48:49.018987   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.018996   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:49.019003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:49.019060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:49.055182   58817 cri.go:89] found id: ""
	I0719 15:48:49.055208   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.055217   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:49.055222   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:49.055270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:49.090341   58817 cri.go:89] found id: ""
	I0719 15:48:49.090364   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.090371   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:49.090378   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:49.090390   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:49.104137   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:49.104166   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:49.239447   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:49.239473   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:49.239489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:49.307270   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:49.307307   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:49.345886   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:49.345925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:51.898153   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:51.911943   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:51.912006   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:51.946512   58817 cri.go:89] found id: ""
	I0719 15:48:51.946562   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.946573   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:51.946603   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:51.946664   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:51.982341   58817 cri.go:89] found id: ""
	I0719 15:48:51.982373   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.982381   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:51.982387   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:51.982441   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:52.019705   58817 cri.go:89] found id: ""
	I0719 15:48:52.019732   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.019739   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:52.019744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:52.019799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:52.057221   58817 cri.go:89] found id: ""
	I0719 15:48:52.057250   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.057262   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:52.057271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:52.057353   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:52.097277   58817 cri.go:89] found id: ""
	I0719 15:48:52.097306   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.097317   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:52.097325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:52.097389   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:52.136354   58817 cri.go:89] found id: ""
	I0719 15:48:52.136398   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.136406   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:52.136412   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:52.136463   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:52.172475   58817 cri.go:89] found id: ""
	I0719 15:48:52.172502   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.172510   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:52.172516   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:52.172565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:52.209164   58817 cri.go:89] found id: ""
	I0719 15:48:52.209192   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.209204   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:52.209214   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:52.209238   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:52.260069   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:52.260101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:52.274794   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:52.274825   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:52.356599   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:52.356628   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:52.356650   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:52.427582   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:52.427630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:51.814049   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:53.815503   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:50.015637   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:52.515491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:51.583726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.083179   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.977864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:54.993571   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:54.993645   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:55.034576   58817 cri.go:89] found id: ""
	I0719 15:48:55.034630   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.034641   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:55.034649   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:55.034712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:55.068305   58817 cri.go:89] found id: ""
	I0719 15:48:55.068332   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.068343   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:55.068350   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:55.068408   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:55.106192   58817 cri.go:89] found id: ""
	I0719 15:48:55.106220   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.106227   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:55.106248   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:55.106304   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:55.141287   58817 cri.go:89] found id: ""
	I0719 15:48:55.141318   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.141328   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:55.141334   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:55.141391   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:55.179965   58817 cri.go:89] found id: ""
	I0719 15:48:55.179989   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.179999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:55.180007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:55.180065   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.213558   58817 cri.go:89] found id: ""
	I0719 15:48:55.213588   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.213598   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:55.213607   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:55.213663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:55.247201   58817 cri.go:89] found id: ""
	I0719 15:48:55.247230   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.247243   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:55.247250   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:55.247309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:55.283157   58817 cri.go:89] found id: ""
	I0719 15:48:55.283191   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.283200   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:55.283211   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:55.283228   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:55.361089   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:55.361116   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:55.361134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:55.437784   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:55.437819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:55.480735   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:55.480770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:55.534013   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:55.534045   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.048567   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:58.063073   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:58.063146   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:58.100499   58817 cri.go:89] found id: ""
	I0719 15:48:58.100527   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.100538   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:58.100545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:58.100612   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:58.136885   58817 cri.go:89] found id: ""
	I0719 15:48:58.136913   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.136924   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:58.136932   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:58.137000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:58.172034   58817 cri.go:89] found id: ""
	I0719 15:48:58.172064   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.172074   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:58.172081   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:58.172135   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:58.209113   58817 cri.go:89] found id: ""
	I0719 15:48:58.209145   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.209157   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:58.209166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:58.209256   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:58.258903   58817 cri.go:89] found id: ""
	I0719 15:48:58.258938   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.258949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:58.258957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:58.259016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.816000   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.817771   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:55.014213   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.014730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:56.083381   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.088572   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.312314   58817 cri.go:89] found id: ""
	I0719 15:48:58.312342   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.312353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:58.312361   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:58.312421   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:58.349566   58817 cri.go:89] found id: ""
	I0719 15:48:58.349628   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.349638   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:58.349645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:58.349709   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:58.383834   58817 cri.go:89] found id: ""
	I0719 15:48:58.383863   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.383880   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:58.383893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:58.383907   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:58.436984   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:58.437020   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.450460   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:58.450489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:58.523392   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:58.523408   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:58.523420   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:58.601407   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:58.601439   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:01.141864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:01.155908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:01.155965   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:01.191492   58817 cri.go:89] found id: ""
	I0719 15:49:01.191524   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.191534   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:01.191542   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:01.191623   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:01.227615   58817 cri.go:89] found id: ""
	I0719 15:49:01.227646   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.227653   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:01.227659   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:01.227716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:01.262624   58817 cri.go:89] found id: ""
	I0719 15:49:01.262647   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.262655   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:01.262661   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:01.262717   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:01.298328   58817 cri.go:89] found id: ""
	I0719 15:49:01.298358   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.298370   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:01.298378   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:01.298439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:01.333181   58817 cri.go:89] found id: ""
	I0719 15:49:01.333208   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.333218   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:01.333225   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:01.333284   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:01.369952   58817 cri.go:89] found id: ""
	I0719 15:49:01.369980   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.369990   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:01.369997   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:01.370076   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:01.405232   58817 cri.go:89] found id: ""
	I0719 15:49:01.405263   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.405273   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:01.405280   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:01.405340   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:01.442960   58817 cri.go:89] found id: ""
	I0719 15:49:01.442989   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.442999   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:01.443009   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:01.443036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:01.493680   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:01.493712   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:01.506699   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:01.506732   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:01.586525   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:01.586547   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:01.586562   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:01.673849   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:01.673897   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:00.313552   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:02.812079   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:59.513087   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:01.514094   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.013514   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:00.583159   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:03.082968   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.219314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:04.233386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:04.233481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:04.274762   58817 cri.go:89] found id: ""
	I0719 15:49:04.274792   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.274802   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:04.274826   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:04.274881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:04.312047   58817 cri.go:89] found id: ""
	I0719 15:49:04.312073   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.312082   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:04.312089   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:04.312164   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:04.351258   58817 cri.go:89] found id: ""
	I0719 15:49:04.351293   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.351307   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:04.351314   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:04.351373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:04.385969   58817 cri.go:89] found id: ""
	I0719 15:49:04.385994   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.386002   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:04.386007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:04.386054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:04.425318   58817 cri.go:89] found id: ""
	I0719 15:49:04.425342   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.425351   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:04.425358   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:04.425416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:04.462578   58817 cri.go:89] found id: ""
	I0719 15:49:04.462607   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.462618   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:04.462626   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:04.462682   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:04.502967   58817 cri.go:89] found id: ""
	I0719 15:49:04.502999   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.503017   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:04.503025   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:04.503084   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:04.540154   58817 cri.go:89] found id: ""
	I0719 15:49:04.540185   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.540195   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:04.540230   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:04.540246   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:04.596126   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:04.596164   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:04.610468   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:04.610509   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:04.683759   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:04.683783   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:04.683803   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:04.764758   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:04.764796   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.303933   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:07.317959   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:07.318031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:07.356462   58817 cri.go:89] found id: ""
	I0719 15:49:07.356490   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.356498   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:07.356511   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:07.356566   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:07.391533   58817 cri.go:89] found id: ""
	I0719 15:49:07.391563   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.391574   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:07.391582   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:07.391662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:07.427877   58817 cri.go:89] found id: ""
	I0719 15:49:07.427914   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.427922   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:07.427927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:07.428005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:07.464667   58817 cri.go:89] found id: ""
	I0719 15:49:07.464691   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.464699   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:07.464704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:07.464768   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:07.499296   58817 cri.go:89] found id: ""
	I0719 15:49:07.499321   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.499329   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:07.499336   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:07.499400   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:07.541683   58817 cri.go:89] found id: ""
	I0719 15:49:07.541715   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.541726   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:07.541733   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:07.541791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:07.577698   58817 cri.go:89] found id: ""
	I0719 15:49:07.577726   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.577737   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:07.577744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:07.577799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:07.613871   58817 cri.go:89] found id: ""
	I0719 15:49:07.613904   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.613914   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:07.613926   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:07.613942   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:07.690982   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:07.691006   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:07.691021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:07.778212   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:07.778277   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.820821   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:07.820866   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:07.873053   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:07.873097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:05.312525   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.812891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:06.013654   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:08.015552   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:05.083931   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.583371   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.387941   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:10.401132   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:10.401205   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:10.437084   58817 cri.go:89] found id: ""
	I0719 15:49:10.437112   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.437120   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:10.437178   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:10.437243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:10.472675   58817 cri.go:89] found id: ""
	I0719 15:49:10.472703   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.472712   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:10.472720   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:10.472780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:10.506448   58817 cri.go:89] found id: ""
	I0719 15:49:10.506480   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.506490   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:10.506497   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:10.506544   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:10.542574   58817 cri.go:89] found id: ""
	I0719 15:49:10.542604   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.542612   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:10.542618   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:10.542701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:10.575963   58817 cri.go:89] found id: ""
	I0719 15:49:10.575990   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.575999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:10.576005   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:10.576063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:10.614498   58817 cri.go:89] found id: ""
	I0719 15:49:10.614529   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.614539   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:10.614548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:10.614613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:10.652802   58817 cri.go:89] found id: ""
	I0719 15:49:10.652825   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.652833   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:10.652838   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:10.652886   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:10.688985   58817 cri.go:89] found id: ""
	I0719 15:49:10.689019   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.689029   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:10.689041   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:10.689058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:10.741552   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:10.741586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.756514   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:10.756542   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:10.837916   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:10.837940   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:10.837956   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:10.919878   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:10.919924   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:09.824389   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.312960   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.512671   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.513359   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.082891   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:14.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:13.462603   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:13.476387   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:13.476449   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:13.514170   58817 cri.go:89] found id: ""
	I0719 15:49:13.514195   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.514205   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:13.514211   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:13.514281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:13.548712   58817 cri.go:89] found id: ""
	I0719 15:49:13.548739   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.548747   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:13.548753   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:13.548808   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:13.582623   58817 cri.go:89] found id: ""
	I0719 15:49:13.582648   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.582657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:13.582664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:13.582721   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:13.619343   58817 cri.go:89] found id: ""
	I0719 15:49:13.619369   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.619379   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:13.619385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:13.619444   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:13.655755   58817 cri.go:89] found id: ""
	I0719 15:49:13.655785   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.655793   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:13.655798   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:13.655856   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:13.691021   58817 cri.go:89] found id: ""
	I0719 15:49:13.691104   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.691124   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:13.691133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:13.691196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:13.728354   58817 cri.go:89] found id: ""
	I0719 15:49:13.728380   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.728390   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:13.728397   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:13.728459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:13.764498   58817 cri.go:89] found id: ""
	I0719 15:49:13.764526   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.764535   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:13.764544   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:13.764557   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.803474   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:13.803500   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:13.854709   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:13.854742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:13.870499   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:13.870526   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:13.943250   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:13.943270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:13.943282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
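	The cycle above repeats throughout this log: minikube probes each control-plane component with crictl, finds no containers, and falls back to collecting kubelet, dmesg, CRI-O, and container-status output. A minimal shell sketch of the same probe sequence run by hand on the node (assumes crictl is installed and sudo access, exactly as the logged commands do):

	  # check whether any control-plane containers exist, mirroring the commands logged above
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    echo "== $c =="
	    sudo crictl ps -a --quiet --name="$c"   # empty output means no container was created
	  done
	  # fall back to the same journald units minikube collects
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400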
	I0719 15:49:16.525806   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:16.539483   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:16.539558   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:16.574003   58817 cri.go:89] found id: ""
	I0719 15:49:16.574032   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.574043   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:16.574050   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:16.574112   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:16.610637   58817 cri.go:89] found id: ""
	I0719 15:49:16.610668   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.610676   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:16.610682   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:16.610731   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:16.648926   58817 cri.go:89] found id: ""
	I0719 15:49:16.648957   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.648968   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:16.648975   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:16.649027   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:16.682819   58817 cri.go:89] found id: ""
	I0719 15:49:16.682848   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.682859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:16.682866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:16.682919   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:16.719879   58817 cri.go:89] found id: ""
	I0719 15:49:16.719912   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.719922   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:16.719930   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:16.719988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:16.755776   58817 cri.go:89] found id: ""
	I0719 15:49:16.755809   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.755820   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:16.755829   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:16.755903   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:16.792158   58817 cri.go:89] found id: ""
	I0719 15:49:16.792186   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.792193   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:16.792199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:16.792260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:16.829694   58817 cri.go:89] found id: ""
	I0719 15:49:16.829722   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.829733   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:16.829741   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:16.829761   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:16.843522   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:16.843552   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:16.914025   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:16.914047   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:16.914063   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.996672   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:16.996709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:17.042138   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:17.042170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:14.813090   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.311701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:15.014386   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.513993   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:16.584566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.082569   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.597598   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:19.611433   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:19.611487   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:19.646047   58817 cri.go:89] found id: ""
	I0719 15:49:19.646073   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.646080   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:19.646086   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:19.646145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:19.683589   58817 cri.go:89] found id: ""
	I0719 15:49:19.683620   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.683632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:19.683643   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:19.683701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:19.722734   58817 cri.go:89] found id: ""
	I0719 15:49:19.722761   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.722771   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:19.722778   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:19.722836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:19.759418   58817 cri.go:89] found id: ""
	I0719 15:49:19.759445   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.759454   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:19.759459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:19.759522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:19.795168   58817 cri.go:89] found id: ""
	I0719 15:49:19.795193   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.795201   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:19.795206   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:19.795259   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:19.830930   58817 cri.go:89] found id: ""
	I0719 15:49:19.830959   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.830969   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:19.830976   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:19.831035   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:19.866165   58817 cri.go:89] found id: ""
	I0719 15:49:19.866187   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.866195   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:19.866201   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:19.866252   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:19.899415   58817 cri.go:89] found id: ""
	I0719 15:49:19.899446   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.899456   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:19.899467   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:19.899482   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.950944   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:19.950975   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:19.964523   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:19.964545   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:20.032244   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:20.032270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:20.032290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:20.110285   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:20.110317   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:22.650693   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:22.666545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:22.666618   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:22.709820   58817 cri.go:89] found id: ""
	I0719 15:49:22.709846   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.709854   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:22.709860   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:22.709905   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:22.745373   58817 cri.go:89] found id: ""
	I0719 15:49:22.745398   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.745406   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:22.745411   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:22.745461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:22.785795   58817 cri.go:89] found id: ""
	I0719 15:49:22.785828   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.785838   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:22.785846   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:22.785904   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:22.826542   58817 cri.go:89] found id: ""
	I0719 15:49:22.826569   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.826579   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:22.826587   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:22.826648   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:22.866761   58817 cri.go:89] found id: ""
	I0719 15:49:22.866789   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.866800   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:22.866807   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:22.866868   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:22.913969   58817 cri.go:89] found id: ""
	I0719 15:49:22.913999   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.914009   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:22.914017   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:22.914082   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:22.950230   58817 cri.go:89] found id: ""
	I0719 15:49:22.950287   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.950298   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:22.950305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:22.950366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:22.986400   58817 cri.go:89] found id: ""
	I0719 15:49:22.986424   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.986434   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:22.986446   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:22.986460   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:23.072119   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:23.072153   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:23.111021   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:23.111053   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:23.161490   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:23.161518   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:23.174729   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:23.174766   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:23.251205   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:19.814129   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.814762   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:23.817102   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:20.012767   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:22.512467   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.587074   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:24.082829   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.752355   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:25.765501   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:25.765559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:25.801073   58817 cri.go:89] found id: ""
	I0719 15:49:25.801107   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.801117   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:25.801126   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:25.801187   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:25.839126   58817 cri.go:89] found id: ""
	I0719 15:49:25.839151   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.839158   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:25.839163   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:25.839210   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:25.873081   58817 cri.go:89] found id: ""
	I0719 15:49:25.873110   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.873120   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:25.873134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:25.873183   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:25.908874   58817 cri.go:89] found id: ""
	I0719 15:49:25.908910   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.908921   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:25.908929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:25.908988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:25.945406   58817 cri.go:89] found id: ""
	I0719 15:49:25.945431   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.945439   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:25.945445   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:25.945515   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:25.978276   58817 cri.go:89] found id: ""
	I0719 15:49:25.978298   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.978306   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:25.978312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:25.978359   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:26.013749   58817 cri.go:89] found id: ""
	I0719 15:49:26.013776   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.013786   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:26.013792   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:26.013840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:26.046225   58817 cri.go:89] found id: ""
	I0719 15:49:26.046269   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.046280   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:26.046290   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:26.046305   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:26.086785   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:26.086808   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:26.138746   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:26.138777   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:26.152114   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:26.152139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:26.224234   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:26.224262   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:26.224279   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:26.312496   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.312687   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.015437   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:27.514515   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:26.084854   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.584103   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.802738   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:28.817246   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:28.817321   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:28.852398   58817 cri.go:89] found id: ""
	I0719 15:49:28.852429   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.852437   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:28.852449   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:28.852500   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:28.890337   58817 cri.go:89] found id: ""
	I0719 15:49:28.890368   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.890378   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:28.890386   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:28.890446   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:28.929083   58817 cri.go:89] found id: ""
	I0719 15:49:28.929106   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.929113   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:28.929119   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:28.929173   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:28.967708   58817 cri.go:89] found id: ""
	I0719 15:49:28.967735   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.967745   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:28.967752   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:28.967812   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:29.001087   58817 cri.go:89] found id: ""
	I0719 15:49:29.001115   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.001131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:29.001139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:29.001198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:29.039227   58817 cri.go:89] found id: ""
	I0719 15:49:29.039258   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.039268   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:29.039275   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:29.039333   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:29.079927   58817 cri.go:89] found id: ""
	I0719 15:49:29.079955   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.079965   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:29.079973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:29.080037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:29.115035   58817 cri.go:89] found id: ""
	I0719 15:49:29.115060   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.115070   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:29.115080   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:29.115094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:29.168452   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:29.168487   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:29.182483   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:29.182517   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:29.256139   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:29.256177   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:29.256193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:29.342435   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:29.342472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:31.888988   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:31.902450   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:31.902524   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:31.940007   58817 cri.go:89] found id: ""
	I0719 15:49:31.940035   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.940045   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:31.940053   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:31.940111   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:31.978055   58817 cri.go:89] found id: ""
	I0719 15:49:31.978089   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.978101   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:31.978109   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:31.978168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:32.011666   58817 cri.go:89] found id: ""
	I0719 15:49:32.011697   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.011707   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:32.011714   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:32.011779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:32.046326   58817 cri.go:89] found id: ""
	I0719 15:49:32.046363   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.046373   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:32.046383   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:32.046447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:32.082387   58817 cri.go:89] found id: ""
	I0719 15:49:32.082416   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.082425   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:32.082432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:32.082488   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:32.118653   58817 cri.go:89] found id: ""
	I0719 15:49:32.118693   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.118703   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:32.118710   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:32.118769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:32.154053   58817 cri.go:89] found id: ""
	I0719 15:49:32.154075   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.154082   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:32.154088   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:32.154134   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:32.189242   58817 cri.go:89] found id: ""
	I0719 15:49:32.189272   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.189283   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:32.189293   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:32.189309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:32.263285   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:32.263313   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:32.263329   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:32.341266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:32.341302   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:32.380827   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:32.380852   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:32.432888   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:32.432922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:30.313153   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:32.812075   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:29.514963   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.515163   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.014174   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.083793   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:33.083838   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.948894   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:34.963787   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:34.963840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:35.000752   58817 cri.go:89] found id: ""
	I0719 15:49:35.000782   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.000788   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:35.000794   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:35.000849   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:35.038325   58817 cri.go:89] found id: ""
	I0719 15:49:35.038355   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.038367   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:35.038375   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:35.038433   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:35.074945   58817 cri.go:89] found id: ""
	I0719 15:49:35.074972   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.074981   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:35.074987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:35.075031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:35.111644   58817 cri.go:89] found id: ""
	I0719 15:49:35.111671   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.111681   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:35.111688   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:35.111746   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:35.146101   58817 cri.go:89] found id: ""
	I0719 15:49:35.146132   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.146141   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:35.146148   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:35.146198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:35.185147   58817 cri.go:89] found id: ""
	I0719 15:49:35.185173   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.185181   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:35.185188   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:35.185233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:35.227899   58817 cri.go:89] found id: ""
	I0719 15:49:35.227931   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.227941   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:35.227949   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:35.228010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:35.265417   58817 cri.go:89] found id: ""
	I0719 15:49:35.265441   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.265451   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:35.265462   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:35.265477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:35.316534   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:35.316567   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:35.330131   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:35.330154   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:35.401068   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:35.401091   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:35.401107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:35.477126   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:35.477170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.019443   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:38.035957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:38.036032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:38.078249   58817 cri.go:89] found id: ""
	I0719 15:49:38.078278   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.078288   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:38.078296   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:38.078367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:38.125072   58817 cri.go:89] found id: ""
	I0719 15:49:38.125098   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.125106   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:38.125112   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:38.125171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:38.165134   58817 cri.go:89] found id: ""
	I0719 15:49:38.165160   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.165170   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:38.165178   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:38.165233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:38.204968   58817 cri.go:89] found id: ""
	I0719 15:49:38.204995   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.205004   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:38.205013   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:38.205074   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:38.237132   58817 cri.go:89] found id: ""
	I0719 15:49:38.237157   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.237167   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:38.237174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:38.237231   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:34.812542   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.311929   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.312244   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:36.513892   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.013261   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:35.084098   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.587696   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:38.274661   58817 cri.go:89] found id: ""
	I0719 15:49:38.274691   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.274699   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:38.274704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:38.274747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:38.311326   58817 cri.go:89] found id: ""
	I0719 15:49:38.311354   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.311365   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:38.311372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:38.311428   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:38.348071   58817 cri.go:89] found id: ""
	I0719 15:49:38.348099   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.348110   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:38.348120   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:38.348134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:38.432986   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:38.433021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.472439   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:38.472486   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:38.526672   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:38.526706   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:38.540777   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:38.540800   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:38.617657   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.118442   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:41.131935   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:41.132016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:41.164303   58817 cri.go:89] found id: ""
	I0719 15:49:41.164330   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.164342   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:41.164348   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:41.164396   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:41.197878   58817 cri.go:89] found id: ""
	I0719 15:49:41.197901   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.197909   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:41.197927   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:41.197979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:41.231682   58817 cri.go:89] found id: ""
	I0719 15:49:41.231712   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.231722   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:41.231730   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:41.231793   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:41.268328   58817 cri.go:89] found id: ""
	I0719 15:49:41.268354   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.268364   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:41.268372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:41.268422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:41.306322   58817 cri.go:89] found id: ""
	I0719 15:49:41.306350   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.306358   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:41.306365   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:41.306416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:41.342332   58817 cri.go:89] found id: ""
	I0719 15:49:41.342361   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.342372   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:41.342379   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:41.342440   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:41.378326   58817 cri.go:89] found id: ""
	I0719 15:49:41.378352   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.378362   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:41.378371   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:41.378422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:41.410776   58817 cri.go:89] found id: ""
	I0719 15:49:41.410804   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.410814   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:41.410824   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:41.410843   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:41.424133   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:41.424157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:41.498684   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.498764   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:41.498784   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:41.583440   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:41.583472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:41.624962   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:41.624998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:41.313207   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.815916   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:41.013495   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.513445   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:40.082726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:42.583599   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.584503   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
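	Interleaved with this log gathering, three other test processes (58376, 58417, 59208) keep polling their metrics-server pods, which never report Ready. A sketch of an equivalent manual check against one of the pods named in the log (hypothetical invocation, assumes kubectl is pointed at the matching cluster):

	  # wait up to a minute for the pod the log is polling to become Ready
	  kubectl -n kube-system wait --for=condition=Ready \
	    pod/metrics-server-78fcd8795b-zwr8g --timeout=60s
	  # or inspect the Ready condition directly
	  kubectl -n kube-system get pod metrics-server-78fcd8795b-zwr8g \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'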
	I0719 15:49:44.177094   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:44.191411   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:44.191466   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:44.226809   58817 cri.go:89] found id: ""
	I0719 15:49:44.226837   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.226847   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:44.226855   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:44.226951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:44.262361   58817 cri.go:89] found id: ""
	I0719 15:49:44.262391   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.262402   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:44.262408   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:44.262452   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:44.295729   58817 cri.go:89] found id: ""
	I0719 15:49:44.295758   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.295768   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:44.295775   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:44.295836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:44.330968   58817 cri.go:89] found id: ""
	I0719 15:49:44.330996   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.331005   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:44.331012   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:44.331068   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:44.367914   58817 cri.go:89] found id: ""
	I0719 15:49:44.367937   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.367945   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:44.367951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:44.368005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:44.401127   58817 cri.go:89] found id: ""
	I0719 15:49:44.401151   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.401159   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:44.401164   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:44.401207   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:44.435696   58817 cri.go:89] found id: ""
	I0719 15:49:44.435724   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.435734   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:44.435741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:44.435803   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:44.481553   58817 cri.go:89] found id: ""
	I0719 15:49:44.481582   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.481592   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:44.481603   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:44.481618   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:44.573147   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:44.573181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:44.618556   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:44.618580   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.673328   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:44.673364   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:44.687806   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:44.687835   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:44.763624   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.264039   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:47.277902   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:47.277984   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:47.318672   58817 cri.go:89] found id: ""
	I0719 15:49:47.318702   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.318713   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:47.318720   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:47.318780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:47.360410   58817 cri.go:89] found id: ""
	I0719 15:49:47.360434   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.360444   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:47.360451   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:47.360507   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:47.397890   58817 cri.go:89] found id: ""
	I0719 15:49:47.397918   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.397925   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:47.397931   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:47.397981   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:47.438930   58817 cri.go:89] found id: ""
	I0719 15:49:47.438960   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.438971   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:47.438981   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:47.439040   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:47.479242   58817 cri.go:89] found id: ""
	I0719 15:49:47.479267   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.479277   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:47.479285   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:47.479341   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:47.518583   58817 cri.go:89] found id: ""
	I0719 15:49:47.518610   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.518620   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:47.518628   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:47.518686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:47.553714   58817 cri.go:89] found id: ""
	I0719 15:49:47.553736   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.553744   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:47.553750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:47.553798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:47.591856   58817 cri.go:89] found id: ""
	I0719 15:49:47.591879   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.591886   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:47.591893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:47.591904   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:47.644911   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:47.644951   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:47.659718   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:47.659742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:47.735693   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.735713   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:47.735727   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:47.816090   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:47.816121   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:46.313534   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.811536   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:46.012299   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.515396   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:47.082848   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:49.083291   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.358703   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:50.373832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:50.373908   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:50.408598   58817 cri.go:89] found id: ""
	I0719 15:49:50.408640   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.408649   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:50.408655   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:50.408701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:50.446067   58817 cri.go:89] found id: ""
	I0719 15:49:50.446096   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.446104   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:50.446110   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:50.446152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:50.480886   58817 cri.go:89] found id: ""
	I0719 15:49:50.480918   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.480927   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:50.480933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:50.480997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:50.514680   58817 cri.go:89] found id: ""
	I0719 15:49:50.514707   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.514717   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:50.514724   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:50.514779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:50.550829   58817 cri.go:89] found id: ""
	I0719 15:49:50.550854   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.550861   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:50.550866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:50.550910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:50.585407   58817 cri.go:89] found id: ""
	I0719 15:49:50.585434   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.585444   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:50.585452   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:50.585511   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:50.623083   58817 cri.go:89] found id: ""
	I0719 15:49:50.623110   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.623121   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:50.623129   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:50.623181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:50.667231   58817 cri.go:89] found id: ""
	I0719 15:49:50.667258   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.667266   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:50.667274   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:50.667290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:50.718998   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:50.719032   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:50.733560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:50.733595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:50.800276   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:50.800298   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:50.800310   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:50.881314   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:50.881354   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:50.813781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:52.817124   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.516602   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.012716   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:51.083390   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.583030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.427179   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:53.444191   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:53.444250   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:53.481092   58817 cri.go:89] found id: ""
	I0719 15:49:53.481125   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.481135   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:53.481143   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:53.481202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:53.517308   58817 cri.go:89] found id: ""
	I0719 15:49:53.517332   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.517340   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:53.517345   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:53.517390   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:53.552638   58817 cri.go:89] found id: ""
	I0719 15:49:53.552667   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.552677   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:53.552684   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:53.552750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:53.587003   58817 cri.go:89] found id: ""
	I0719 15:49:53.587027   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.587034   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:53.587044   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:53.587093   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:53.620361   58817 cri.go:89] found id: ""
	I0719 15:49:53.620389   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.620399   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:53.620406   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:53.620464   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:53.659231   58817 cri.go:89] found id: ""
	I0719 15:49:53.659255   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.659262   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:53.659267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:53.659323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:53.695312   58817 cri.go:89] found id: ""
	I0719 15:49:53.695345   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.695355   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:53.695362   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:53.695430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:53.735670   58817 cri.go:89] found id: ""
	I0719 15:49:53.735698   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.735708   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:53.735718   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:53.735733   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:53.750912   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:53.750940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:53.818038   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:53.818064   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:53.818077   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:53.902200   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:53.902259   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.945805   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:53.945847   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.498178   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:56.511454   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:56.511541   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:56.548043   58817 cri.go:89] found id: ""
	I0719 15:49:56.548070   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.548081   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:56.548089   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:56.548149   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:56.583597   58817 cri.go:89] found id: ""
	I0719 15:49:56.583620   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.583632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:56.583651   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:56.583710   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:56.622673   58817 cri.go:89] found id: ""
	I0719 15:49:56.622704   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.622714   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:56.622722   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:56.622785   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:56.659663   58817 cri.go:89] found id: ""
	I0719 15:49:56.659691   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.659702   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:56.659711   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:56.659764   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:56.694072   58817 cri.go:89] found id: ""
	I0719 15:49:56.694097   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.694105   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:56.694111   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:56.694158   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:56.730104   58817 cri.go:89] found id: ""
	I0719 15:49:56.730131   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.730139   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:56.730144   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:56.730202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:56.762952   58817 cri.go:89] found id: ""
	I0719 15:49:56.762977   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.762988   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:56.762995   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:56.763059   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:56.800091   58817 cri.go:89] found id: ""
	I0719 15:49:56.800114   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.800122   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:56.800130   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:56.800141   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:56.843328   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:56.843363   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.894700   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:56.894734   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:56.908975   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:56.908999   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:56.980062   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:56.980087   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:56.980099   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:55.312032   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.813778   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:55.013719   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.014070   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:56.083506   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:58.582593   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.557467   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:59.571083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:59.571151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:59.606593   58817 cri.go:89] found id: ""
	I0719 15:49:59.606669   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.606680   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:59.606688   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:59.606743   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:59.643086   58817 cri.go:89] found id: ""
	I0719 15:49:59.643115   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.643126   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:59.643134   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:59.643188   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:59.678976   58817 cri.go:89] found id: ""
	I0719 15:49:59.678995   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.679002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:59.679008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:59.679060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:59.713450   58817 cri.go:89] found id: ""
	I0719 15:49:59.713483   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.713490   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:59.713495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:59.713540   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:59.749902   58817 cri.go:89] found id: ""
	I0719 15:49:59.749924   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.749932   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:59.749938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:59.749985   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:59.793298   58817 cri.go:89] found id: ""
	I0719 15:49:59.793327   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.793335   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:59.793341   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:59.793399   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:59.835014   58817 cri.go:89] found id: ""
	I0719 15:49:59.835040   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.835047   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:59.835053   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:59.835101   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:59.874798   58817 cri.go:89] found id: ""
	I0719 15:49:59.874824   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.874831   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:59.874840   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:59.874851   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:59.948173   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:59.948195   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:59.948210   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:00.026793   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:00.026828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:00.066659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:00.066687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:00.119005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:00.119036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:02.634375   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:02.648845   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:02.648918   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:02.683204   58817 cri.go:89] found id: ""
	I0719 15:50:02.683231   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.683240   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:02.683246   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:02.683308   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:02.718869   58817 cri.go:89] found id: ""
	I0719 15:50:02.718901   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.718914   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:02.718921   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:02.718979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:02.758847   58817 cri.go:89] found id: ""
	I0719 15:50:02.758874   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.758885   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:02.758892   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:02.758951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:02.800199   58817 cri.go:89] found id: ""
	I0719 15:50:02.800230   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.800238   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:02.800243   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:02.800289   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:02.840302   58817 cri.go:89] found id: ""
	I0719 15:50:02.840334   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.840345   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:02.840353   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:02.840415   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:02.874769   58817 cri.go:89] found id: ""
	I0719 15:50:02.874794   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.874801   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:02.874818   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:02.874885   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:02.914492   58817 cri.go:89] found id: ""
	I0719 15:50:02.914522   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.914532   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:02.914540   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:02.914601   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:02.951548   58817 cri.go:89] found id: ""
	I0719 15:50:02.951577   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.951588   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:02.951599   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:02.951613   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:03.003081   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:03.003118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:03.017738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:03.017767   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:03.090925   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:03.090947   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:03.090958   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:03.169066   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:03.169101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:59.815894   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.312541   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.513158   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.013500   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:00.583268   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:03.082967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.712269   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:05.724799   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:05.724872   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:05.759074   58817 cri.go:89] found id: ""
	I0719 15:50:05.759101   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.759108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:05.759113   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:05.759169   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:05.798316   58817 cri.go:89] found id: ""
	I0719 15:50:05.798413   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.798432   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:05.798442   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:05.798504   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:05.834861   58817 cri.go:89] found id: ""
	I0719 15:50:05.834890   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.834898   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:05.834903   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:05.834962   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:05.868547   58817 cri.go:89] found id: ""
	I0719 15:50:05.868574   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.868582   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:05.868588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:05.868691   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:05.903684   58817 cri.go:89] found id: ""
	I0719 15:50:05.903718   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.903730   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:05.903738   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:05.903798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:05.938521   58817 cri.go:89] found id: ""
	I0719 15:50:05.938552   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.938567   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:05.938576   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:05.938628   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:05.973683   58817 cri.go:89] found id: ""
	I0719 15:50:05.973710   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.973717   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:05.973723   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:05.973825   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:06.010528   58817 cri.go:89] found id: ""
	I0719 15:50:06.010559   58817 logs.go:276] 0 containers: []
	W0719 15:50:06.010569   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:06.010580   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:06.010593   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:06.053090   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:06.053145   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:06.106906   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:06.106939   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:06.121914   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:06.121944   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:06.197465   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:06.197492   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:06.197507   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:04.814326   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.314104   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:04.513144   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.013900   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.014269   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.582967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.583076   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.583550   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:08.782285   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:08.795115   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:08.795180   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:08.834264   58817 cri.go:89] found id: ""
	I0719 15:50:08.834295   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.834306   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:08.834314   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:08.834371   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:08.873227   58817 cri.go:89] found id: ""
	I0719 15:50:08.873258   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.873268   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:08.873276   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:08.873330   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:08.907901   58817 cri.go:89] found id: ""
	I0719 15:50:08.907929   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.907940   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:08.907948   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:08.908011   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:08.941350   58817 cri.go:89] found id: ""
	I0719 15:50:08.941381   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.941391   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:08.941400   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:08.941453   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:08.978469   58817 cri.go:89] found id: ""
	I0719 15:50:08.978495   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.978502   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:08.978508   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:08.978563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:09.017469   58817 cri.go:89] found id: ""
	I0719 15:50:09.017492   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.017501   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:09.017509   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:09.017563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:09.056675   58817 cri.go:89] found id: ""
	I0719 15:50:09.056703   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.056711   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:09.056718   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:09.056769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:09.096655   58817 cri.go:89] found id: ""
	I0719 15:50:09.096680   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.096688   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:09.096696   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:09.096710   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:09.135765   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:09.135791   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:09.189008   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:09.189044   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:09.203988   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:09.204014   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:09.278418   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:09.278440   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:09.278453   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:11.857017   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:11.870592   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:11.870650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:11.907057   58817 cri.go:89] found id: ""
	I0719 15:50:11.907088   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.907097   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:11.907103   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:11.907152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:11.944438   58817 cri.go:89] found id: ""
	I0719 15:50:11.944466   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.944476   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:11.944484   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:11.944547   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:11.986506   58817 cri.go:89] found id: ""
	I0719 15:50:11.986534   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.986545   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:11.986553   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:11.986610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:12.026171   58817 cri.go:89] found id: ""
	I0719 15:50:12.026221   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.026250   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:12.026260   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:12.026329   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:12.060990   58817 cri.go:89] found id: ""
	I0719 15:50:12.061018   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.061028   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:12.061036   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:12.061097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:12.098545   58817 cri.go:89] found id: ""
	I0719 15:50:12.098573   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.098584   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:12.098591   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:12.098650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:12.134949   58817 cri.go:89] found id: ""
	I0719 15:50:12.134978   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.134989   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:12.134996   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:12.135061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:12.171142   58817 cri.go:89] found id: ""
	I0719 15:50:12.171165   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.171173   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:12.171181   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:12.171193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:12.211496   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:12.211536   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:12.266024   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:12.266060   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:12.280951   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:12.280985   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:12.352245   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:12.352269   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:12.352280   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:09.813831   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.815120   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.815551   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.512872   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.514351   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.584717   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.082745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.929733   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:14.943732   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:14.943815   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:14.980506   58817 cri.go:89] found id: ""
	I0719 15:50:14.980529   58817 logs.go:276] 0 containers: []
	W0719 15:50:14.980539   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:14.980545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:14.980590   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:15.015825   58817 cri.go:89] found id: ""
	I0719 15:50:15.015853   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.015863   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:15.015870   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:15.015937   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:15.054862   58817 cri.go:89] found id: ""
	I0719 15:50:15.054894   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.054905   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:15.054913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:15.054973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:15.092542   58817 cri.go:89] found id: ""
	I0719 15:50:15.092573   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.092590   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:15.092598   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:15.092663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:15.127815   58817 cri.go:89] found id: ""
	I0719 15:50:15.127843   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.127853   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:15.127865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:15.127931   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:15.166423   58817 cri.go:89] found id: ""
	I0719 15:50:15.166446   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.166453   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:15.166459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:15.166517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.199240   58817 cri.go:89] found id: ""
	I0719 15:50:15.199268   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.199277   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:15.199283   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:15.199336   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:15.231927   58817 cri.go:89] found id: ""
	I0719 15:50:15.231957   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.231966   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:15.231978   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:15.231994   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:15.284551   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:15.284586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:15.299152   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:15.299181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:15.374085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:15.374107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:15.374123   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:15.458103   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:15.458144   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:18.003862   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:18.019166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:18.019215   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:18.053430   58817 cri.go:89] found id: ""
	I0719 15:50:18.053470   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.053482   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:18.053492   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:18.053565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:18.091897   58817 cri.go:89] found id: ""
	I0719 15:50:18.091922   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.091931   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:18.091936   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:18.091997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:18.127239   58817 cri.go:89] found id: ""
	I0719 15:50:18.127266   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.127277   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:18.127287   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:18.127346   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:18.163927   58817 cri.go:89] found id: ""
	I0719 15:50:18.163953   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.163965   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:18.163973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:18.164032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:18.199985   58817 cri.go:89] found id: ""
	I0719 15:50:18.200015   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.200027   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:18.200034   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:18.200096   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:18.234576   58817 cri.go:89] found id: ""
	I0719 15:50:18.234603   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.234614   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:18.234625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:18.234686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.815701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:17.816052   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.012834   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.014504   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.582156   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.583011   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.270493   58817 cri.go:89] found id: ""
	I0719 15:50:18.270516   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.270526   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:18.270532   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:18.270588   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:18.306779   58817 cri.go:89] found id: ""
	I0719 15:50:18.306813   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.306821   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:18.306832   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:18.306850   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:18.375782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:18.375814   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:18.390595   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:18.390630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:18.459204   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:18.459227   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:18.459243   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:18.540667   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:18.540724   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.084736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:21.099416   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:21.099495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:21.133193   58817 cri.go:89] found id: ""
	I0719 15:50:21.133216   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.133224   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:21.133231   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:21.133309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:21.174649   58817 cri.go:89] found id: ""
	I0719 15:50:21.174679   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.174689   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:21.174697   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:21.174757   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:21.208279   58817 cri.go:89] found id: ""
	I0719 15:50:21.208309   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.208319   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:21.208325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:21.208386   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:21.242199   58817 cri.go:89] found id: ""
	I0719 15:50:21.242222   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.242229   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:21.242247   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:21.242301   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:21.278018   58817 cri.go:89] found id: ""
	I0719 15:50:21.278050   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.278059   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:21.278069   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:21.278125   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:21.314397   58817 cri.go:89] found id: ""
	I0719 15:50:21.314419   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.314427   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:21.314435   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:21.314490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:21.349041   58817 cri.go:89] found id: ""
	I0719 15:50:21.349067   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.349075   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:21.349080   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:21.349129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:21.387325   58817 cri.go:89] found id: ""
	I0719 15:50:21.387353   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.387361   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:21.387369   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:21.387384   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:21.401150   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:21.401177   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:21.465784   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:21.465810   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:21.465821   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:21.545965   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:21.545998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.584054   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:21.584081   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:20.312912   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:22.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:20.513572   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.014103   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:21.082689   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.583483   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:24.139199   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:24.152485   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:24.152552   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:24.186387   58817 cri.go:89] found id: ""
	I0719 15:50:24.186417   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.186427   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:24.186435   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:24.186494   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:24.226061   58817 cri.go:89] found id: ""
	I0719 15:50:24.226093   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.226103   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:24.226111   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:24.226168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:24.265542   58817 cri.go:89] found id: ""
	I0719 15:50:24.265566   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.265574   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:24.265579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:24.265630   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:24.300277   58817 cri.go:89] found id: ""
	I0719 15:50:24.300308   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.300318   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:24.300325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:24.300378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:24.340163   58817 cri.go:89] found id: ""
	I0719 15:50:24.340192   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.340203   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:24.340211   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:24.340270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:24.375841   58817 cri.go:89] found id: ""
	I0719 15:50:24.375863   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.375873   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:24.375881   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:24.375941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:24.413528   58817 cri.go:89] found id: ""
	I0719 15:50:24.413558   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.413569   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:24.413577   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:24.413641   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:24.451101   58817 cri.go:89] found id: ""
	I0719 15:50:24.451129   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.451139   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:24.451148   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:24.451163   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:24.491150   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:24.491178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.544403   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:24.544436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:24.560376   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:24.560407   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:24.633061   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:24.633081   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:24.633097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.214261   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:27.227642   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:27.227724   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:27.263805   58817 cri.go:89] found id: ""
	I0719 15:50:27.263838   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.263851   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:27.263859   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:27.263941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:27.299817   58817 cri.go:89] found id: ""
	I0719 15:50:27.299860   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.299872   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:27.299879   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:27.299947   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:27.339924   58817 cri.go:89] found id: ""
	I0719 15:50:27.339953   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.339963   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:27.339971   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:27.340036   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:27.375850   58817 cri.go:89] found id: ""
	I0719 15:50:27.375877   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.375885   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:27.375891   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:27.375940   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:27.410395   58817 cri.go:89] found id: ""
	I0719 15:50:27.410420   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.410429   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:27.410437   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:27.410498   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:27.444124   58817 cri.go:89] found id: ""
	I0719 15:50:27.444154   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.444162   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:27.444167   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:27.444230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:27.478162   58817 cri.go:89] found id: ""
	I0719 15:50:27.478191   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.478202   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:27.478210   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:27.478285   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:27.514901   58817 cri.go:89] found id: ""
	I0719 15:50:27.514939   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.514949   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:27.514959   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:27.514973   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.591783   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:27.591815   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:27.629389   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:27.629431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:27.684318   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:27.684351   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:27.698415   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:27.698441   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:27.770032   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:25.312127   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.312599   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.512955   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.515102   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.583597   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:28.083843   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.270332   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:30.284645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:30.284716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:30.324096   58817 cri.go:89] found id: ""
	I0719 15:50:30.324120   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.324128   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:30.324133   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:30.324181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:30.362682   58817 cri.go:89] found id: ""
	I0719 15:50:30.362749   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.362769   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:30.362777   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:30.362848   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:30.400797   58817 cri.go:89] found id: ""
	I0719 15:50:30.400829   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.400840   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:30.400847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:30.400910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:30.438441   58817 cri.go:89] found id: ""
	I0719 15:50:30.438471   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.438482   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:30.438490   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:30.438556   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:30.481525   58817 cri.go:89] found id: ""
	I0719 15:50:30.481555   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.481567   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:30.481581   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:30.481643   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:30.527384   58817 cri.go:89] found id: ""
	I0719 15:50:30.527416   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.527426   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:30.527434   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:30.527495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:30.591502   58817 cri.go:89] found id: ""
	I0719 15:50:30.591530   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.591540   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:30.591548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:30.591603   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:30.627271   58817 cri.go:89] found id: ""
	I0719 15:50:30.627298   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.627306   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:30.627315   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:30.627326   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:30.680411   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:30.680463   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:30.694309   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:30.694344   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:30.771740   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.771776   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:30.771794   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:30.857591   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:30.857625   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:29.815683   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.312009   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.312309   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.013332   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.013381   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.082937   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.407376   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:33.421602   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:33.421680   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:33.458608   58817 cri.go:89] found id: ""
	I0719 15:50:33.458640   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.458650   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:33.458658   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:33.458720   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:33.494250   58817 cri.go:89] found id: ""
	I0719 15:50:33.494279   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.494290   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:33.494298   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:33.494363   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:33.534768   58817 cri.go:89] found id: ""
	I0719 15:50:33.534793   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.534804   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:33.534811   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:33.534876   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:33.569912   58817 cri.go:89] found id: ""
	I0719 15:50:33.569942   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.569950   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:33.569955   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:33.570010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:33.605462   58817 cri.go:89] found id: ""
	I0719 15:50:33.605486   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.605496   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:33.605503   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:33.605569   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:33.649091   58817 cri.go:89] found id: ""
	I0719 15:50:33.649121   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.649129   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:33.649134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:33.649184   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:33.682056   58817 cri.go:89] found id: ""
	I0719 15:50:33.682084   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.682092   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:33.682097   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:33.682145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:33.717454   58817 cri.go:89] found id: ""
	I0719 15:50:33.717483   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.717492   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:33.717501   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:33.717513   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:33.770793   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:33.770828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:33.784549   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:33.784583   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:33.860831   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:33.860851   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:33.860862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:33.936003   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:33.936037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.476206   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:36.489032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:36.489090   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:36.525070   58817 cri.go:89] found id: ""
	I0719 15:50:36.525098   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.525108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:36.525116   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:36.525171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:36.560278   58817 cri.go:89] found id: ""
	I0719 15:50:36.560301   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.560309   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:36.560315   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:36.560367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:36.595594   58817 cri.go:89] found id: ""
	I0719 15:50:36.595620   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.595630   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:36.595637   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:36.595696   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:36.631403   58817 cri.go:89] found id: ""
	I0719 15:50:36.631434   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.631442   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:36.631447   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:36.631502   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:36.671387   58817 cri.go:89] found id: ""
	I0719 15:50:36.671413   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.671424   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:36.671431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:36.671492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:36.705473   58817 cri.go:89] found id: ""
	I0719 15:50:36.705500   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.705507   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:36.705514   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:36.705559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:36.741077   58817 cri.go:89] found id: ""
	I0719 15:50:36.741110   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.741126   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:36.741133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:36.741195   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:36.781987   58817 cri.go:89] found id: ""
	I0719 15:50:36.782016   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.782025   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:36.782036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:36.782051   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:36.795107   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:36.795138   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:36.869034   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:36.869056   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:36.869070   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:36.946172   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:36.946207   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.983497   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:36.983535   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:36.812745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.312184   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.513321   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:36.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.012035   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:35.084310   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:37.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.537658   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:39.551682   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:39.551756   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:39.588176   58817 cri.go:89] found id: ""
	I0719 15:50:39.588199   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.588206   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:39.588212   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:39.588255   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:39.623202   58817 cri.go:89] found id: ""
	I0719 15:50:39.623235   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.623245   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:39.623265   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:39.623317   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:39.658601   58817 cri.go:89] found id: ""
	I0719 15:50:39.658634   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.658646   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:39.658653   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:39.658712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:39.694820   58817 cri.go:89] found id: ""
	I0719 15:50:39.694842   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.694852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:39.694859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:39.694922   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:39.734296   58817 cri.go:89] found id: ""
	I0719 15:50:39.734325   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.734333   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:39.734339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:39.734393   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:39.773416   58817 cri.go:89] found id: ""
	I0719 15:50:39.773506   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.773527   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:39.773538   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:39.773614   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:39.812265   58817 cri.go:89] found id: ""
	I0719 15:50:39.812293   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.812303   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:39.812311   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:39.812366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:39.849148   58817 cri.go:89] found id: ""
	I0719 15:50:39.849177   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.849188   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:39.849199   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:39.849213   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.900254   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:39.900285   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:39.913997   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:39.914025   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:39.986937   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:39.986963   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:39.986982   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:40.071967   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:40.072009   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:42.612170   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:42.625741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:42.625824   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:42.662199   58817 cri.go:89] found id: ""
	I0719 15:50:42.662230   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.662253   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:42.662261   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:42.662314   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:42.702346   58817 cri.go:89] found id: ""
	I0719 15:50:42.702374   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.702387   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:42.702394   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:42.702454   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:42.743446   58817 cri.go:89] found id: ""
	I0719 15:50:42.743475   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.743488   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:42.743495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:42.743555   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:42.783820   58817 cri.go:89] found id: ""
	I0719 15:50:42.783844   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.783852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:42.783858   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:42.783917   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:42.821375   58817 cri.go:89] found id: ""
	I0719 15:50:42.821403   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.821414   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:42.821421   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:42.821484   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:42.856010   58817 cri.go:89] found id: ""
	I0719 15:50:42.856037   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.856045   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:42.856051   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:42.856097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:42.895867   58817 cri.go:89] found id: ""
	I0719 15:50:42.895894   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.895902   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:42.895908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:42.895955   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:42.933077   58817 cri.go:89] found id: ""
	I0719 15:50:42.933106   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.933114   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:42.933123   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:42.933135   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:42.984103   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:42.984142   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:42.998043   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:42.998075   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:43.069188   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:43.069210   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:43.069222   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:43.148933   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:43.148991   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:41.313263   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.816257   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:41.014458   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.017012   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:40.083591   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:42.582246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:44.582857   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.687007   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:45.701019   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:45.701099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:45.737934   58817 cri.go:89] found id: ""
	I0719 15:50:45.737960   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.737970   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:45.737978   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:45.738037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:45.774401   58817 cri.go:89] found id: ""
	I0719 15:50:45.774428   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.774438   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:45.774447   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:45.774503   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:45.814507   58817 cri.go:89] found id: ""
	I0719 15:50:45.814533   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.814544   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:45.814551   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:45.814610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:45.855827   58817 cri.go:89] found id: ""
	I0719 15:50:45.855852   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.855870   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:45.855877   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:45.855928   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:45.898168   58817 cri.go:89] found id: ""
	I0719 15:50:45.898196   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.898204   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:45.898209   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:45.898281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:45.933402   58817 cri.go:89] found id: ""
	I0719 15:50:45.933433   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.933449   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:45.933468   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:45.933525   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:45.971415   58817 cri.go:89] found id: ""
	I0719 15:50:45.971443   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.971451   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:45.971457   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:45.971508   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:46.006700   58817 cri.go:89] found id: ""
	I0719 15:50:46.006729   58817 logs.go:276] 0 containers: []
	W0719 15:50:46.006739   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:46.006750   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:46.006764   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:46.083885   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:46.083925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:46.122277   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:46.122308   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:46.172907   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:46.172940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:46.186365   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:46.186392   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:46.263803   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:46.312320   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.312805   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.512849   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.013822   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:46.582906   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.583537   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.764336   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:48.778927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:48.779002   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:48.816538   58817 cri.go:89] found id: ""
	I0719 15:50:48.816566   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.816576   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:48.816589   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:48.816657   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:48.852881   58817 cri.go:89] found id: ""
	I0719 15:50:48.852904   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.852912   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:48.852925   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:48.852987   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:48.886156   58817 cri.go:89] found id: ""
	I0719 15:50:48.886187   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.886196   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:48.886202   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:48.886271   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:48.922221   58817 cri.go:89] found id: ""
	I0719 15:50:48.922270   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.922281   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:48.922289   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:48.922350   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:48.957707   58817 cri.go:89] found id: ""
	I0719 15:50:48.957735   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.957743   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:48.957750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:48.957797   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:48.994635   58817 cri.go:89] found id: ""
	I0719 15:50:48.994667   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.994679   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:48.994687   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:48.994747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:49.028849   58817 cri.go:89] found id: ""
	I0719 15:50:49.028873   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.028881   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:49.028886   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:49.028933   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:49.063835   58817 cri.go:89] found id: ""
	I0719 15:50:49.063865   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.063875   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:49.063885   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:49.063900   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:49.144709   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:49.144751   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:49.184783   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:49.184819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:49.237005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:49.237037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:49.250568   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:49.250595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:49.319473   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:51.820132   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:51.833230   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:51.833298   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:51.870393   58817 cri.go:89] found id: ""
	I0719 15:50:51.870424   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.870435   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:51.870442   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:51.870496   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:51.906094   58817 cri.go:89] found id: ""
	I0719 15:50:51.906119   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.906132   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:51.906139   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:51.906192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:51.941212   58817 cri.go:89] found id: ""
	I0719 15:50:51.941236   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.941244   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:51.941257   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:51.941300   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:51.973902   58817 cri.go:89] found id: ""
	I0719 15:50:51.973925   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.973933   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:51.973938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:51.973983   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:52.010449   58817 cri.go:89] found id: ""
	I0719 15:50:52.010476   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.010486   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:52.010493   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:52.010551   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:52.047317   58817 cri.go:89] found id: ""
	I0719 15:50:52.047343   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.047353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:52.047360   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:52.047405   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:52.081828   58817 cri.go:89] found id: ""
	I0719 15:50:52.081859   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.081868   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:52.081875   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:52.081946   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:52.119128   58817 cri.go:89] found id: ""
	I0719 15:50:52.119156   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.119164   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:52.119172   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:52.119185   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:52.132928   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:52.132955   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:52.203075   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:52.203099   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:52.203114   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:52.278743   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:52.278781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:52.325456   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:52.325492   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:50.815488   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.312626   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:50.013996   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:52.514493   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:51.082358   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.582566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
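	Interleaved with that loop, three other test processes (58376, 58417, 59208) poll their metrics-server pods through pod_ready.go and keep observing Ready=False. An equivalent one-off check with plain kubectl is sketched below; the pod name is taken from the log, while <profile> stands in for whichever minikube context the test owns.

	kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-h7hgv \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False in this state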
	I0719 15:50:54.879243   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:54.894078   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:54.894147   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:54.931463   58817 cri.go:89] found id: ""
	I0719 15:50:54.931496   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.931507   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:54.931514   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:54.931585   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:54.968803   58817 cri.go:89] found id: ""
	I0719 15:50:54.968831   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.968840   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:54.968847   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:54.968911   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:55.005621   58817 cri.go:89] found id: ""
	I0719 15:50:55.005646   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.005657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:55.005664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:55.005733   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:55.040271   58817 cri.go:89] found id: ""
	I0719 15:50:55.040292   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.040299   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:55.040305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:55.040349   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:55.072693   58817 cri.go:89] found id: ""
	I0719 15:50:55.072714   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.072722   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:55.072728   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:55.072779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:55.111346   58817 cri.go:89] found id: ""
	I0719 15:50:55.111373   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.111381   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:55.111386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:55.111430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:55.149358   58817 cri.go:89] found id: ""
	I0719 15:50:55.149385   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.149395   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:55.149402   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:55.149459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:55.183807   58817 cri.go:89] found id: ""
	I0719 15:50:55.183834   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.183845   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:55.183856   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:55.183870   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:55.234128   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:55.234157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:55.247947   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:55.247971   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:55.317405   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:55.317425   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:55.317436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:55.398613   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:55.398649   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:57.945601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:57.960139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:57.960193   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:58.000436   58817 cri.go:89] found id: ""
	I0719 15:50:58.000462   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.000469   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:58.000476   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:58.000522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:58.041437   58817 cri.go:89] found id: ""
	I0719 15:50:58.041463   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.041472   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:58.041477   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:58.041539   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:58.077280   58817 cri.go:89] found id: ""
	I0719 15:50:58.077303   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.077311   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:58.077317   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:58.077373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:58.111992   58817 cri.go:89] found id: ""
	I0719 15:50:58.112019   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.112026   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:58.112032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:58.112107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:58.146582   58817 cri.go:89] found id: ""
	I0719 15:50:58.146610   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.146620   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:58.146625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:58.146669   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:58.182159   58817 cri.go:89] found id: ""
	I0719 15:50:58.182187   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.182196   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:58.182204   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:58.182279   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:58.215804   58817 cri.go:89] found id: ""
	I0719 15:50:58.215834   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.215844   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:58.215852   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:58.215913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:58.249366   58817 cri.go:89] found id: ""
	I0719 15:50:58.249392   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.249402   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:58.249413   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:58.249430   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:50:55.814460   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.313739   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:55.014039   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:57.513248   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:56.082876   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.583172   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	W0719 15:50:58.324510   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
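	Every describe-nodes attempt in this stretch fails identically: connection refused on localhost:8443, meaning no apiserver is serving on the node at all. Two standard commands (not part of the harness) that would confirm that from inside the VM:

	sudo ss -tlnp | grep 8443                 # no listener while the apiserver is down
	curl -k https://localhost:8443/healthz    # fails with "Connection refused" in this state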
	I0719 15:50:58.324536   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:58.324550   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:58.406320   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:58.406353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:58.449820   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:58.449854   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:58.502245   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:58.502281   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:01.018374   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:01.032683   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:01.032753   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:01.071867   58817 cri.go:89] found id: ""
	I0719 15:51:01.071898   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.071910   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:01.071917   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:01.071982   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:01.108227   58817 cri.go:89] found id: ""
	I0719 15:51:01.108251   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.108259   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:01.108264   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:01.108309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:01.143029   58817 cri.go:89] found id: ""
	I0719 15:51:01.143064   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.143076   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:01.143083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:01.143154   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:01.178871   58817 cri.go:89] found id: ""
	I0719 15:51:01.178901   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.178911   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:01.178919   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:01.178974   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:01.216476   58817 cri.go:89] found id: ""
	I0719 15:51:01.216507   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.216518   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:01.216526   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:01.216584   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:01.254534   58817 cri.go:89] found id: ""
	I0719 15:51:01.254557   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.254565   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:01.254572   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:01.254617   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:01.293156   58817 cri.go:89] found id: ""
	I0719 15:51:01.293187   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.293198   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:01.293212   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:01.293278   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:01.328509   58817 cri.go:89] found id: ""
	I0719 15:51:01.328538   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.328549   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:01.328560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:01.328574   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:01.399659   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:01.399678   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:01.399693   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:01.476954   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:01.476993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:01.519513   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:01.519539   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:01.571976   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:01.572015   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:00.812445   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.813629   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.011751   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.013062   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.013473   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.584028   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:03.082149   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.088726   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:04.102579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:04.102642   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:04.141850   58817 cri.go:89] found id: ""
	I0719 15:51:04.141888   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.141899   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:04.141907   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:04.141988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:04.177821   58817 cri.go:89] found id: ""
	I0719 15:51:04.177846   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.177854   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:04.177859   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:04.177914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:04.212905   58817 cri.go:89] found id: ""
	I0719 15:51:04.212935   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.212945   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:04.212951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:04.213012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:04.249724   58817 cri.go:89] found id: ""
	I0719 15:51:04.249762   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.249773   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:04.249781   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:04.249843   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:04.285373   58817 cri.go:89] found id: ""
	I0719 15:51:04.285407   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.285418   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:04.285430   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:04.285490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:04.348842   58817 cri.go:89] found id: ""
	I0719 15:51:04.348878   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.348888   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:04.348895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:04.348963   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:04.384420   58817 cri.go:89] found id: ""
	I0719 15:51:04.384448   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.384459   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:04.384466   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:04.384533   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:04.420716   58817 cri.go:89] found id: ""
	I0719 15:51:04.420746   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.420754   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:04.420763   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:04.420775   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:04.472986   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:04.473027   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.488911   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:04.488938   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:04.563103   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:04.563125   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:04.563139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:04.640110   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:04.640151   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.183190   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:07.196605   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:07.196667   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:07.234974   58817 cri.go:89] found id: ""
	I0719 15:51:07.235002   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.235010   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:07.235016   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:07.235066   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:07.269045   58817 cri.go:89] found id: ""
	I0719 15:51:07.269078   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.269089   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:07.269096   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:07.269156   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:07.308866   58817 cri.go:89] found id: ""
	I0719 15:51:07.308897   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.308907   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:07.308914   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:07.308973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:07.344406   58817 cri.go:89] found id: ""
	I0719 15:51:07.344440   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.344451   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:07.344459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:07.344517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:07.379914   58817 cri.go:89] found id: ""
	I0719 15:51:07.379948   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.379956   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:07.379962   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:07.380010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:07.420884   58817 cri.go:89] found id: ""
	I0719 15:51:07.420923   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.420934   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:07.420942   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:07.421012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:07.455012   58817 cri.go:89] found id: ""
	I0719 15:51:07.455041   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.455071   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:07.455082   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:07.455151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:07.492321   58817 cri.go:89] found id: ""
	I0719 15:51:07.492346   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.492354   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:07.492362   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:07.492374   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:07.506377   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:07.506408   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:07.578895   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:07.578928   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:07.578943   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:07.662333   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:07.662373   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.701823   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:07.701856   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:05.312865   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.816945   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:06.513634   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.012283   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:05.084185   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.583429   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.583944   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:10.256610   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:10.270156   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:10.270225   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:10.311318   58817 cri.go:89] found id: ""
	I0719 15:51:10.311347   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.311357   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:10.311365   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:10.311422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:10.347145   58817 cri.go:89] found id: ""
	I0719 15:51:10.347174   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.347183   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:10.347189   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:10.347243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:10.381626   58817 cri.go:89] found id: ""
	I0719 15:51:10.381659   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.381672   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:10.381680   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:10.381750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:10.417077   58817 cri.go:89] found id: ""
	I0719 15:51:10.417103   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.417111   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:10.417117   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:10.417174   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:10.454094   58817 cri.go:89] found id: ""
	I0719 15:51:10.454123   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.454131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:10.454137   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:10.454185   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:10.489713   58817 cri.go:89] found id: ""
	I0719 15:51:10.489739   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.489747   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:10.489753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:10.489799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:10.524700   58817 cri.go:89] found id: ""
	I0719 15:51:10.524737   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.524745   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:10.524753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:10.524810   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:10.564249   58817 cri.go:89] found id: ""
	I0719 15:51:10.564277   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.564285   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:10.564293   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:10.564309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.618563   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:10.618599   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:10.633032   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:10.633058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:10.706504   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:10.706530   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:10.706546   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:10.800542   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:10.800581   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:10.315941   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:12.812732   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.013749   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.513338   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.584335   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:14.083745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.357761   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:13.371415   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:13.371492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:13.406666   58817 cri.go:89] found id: ""
	I0719 15:51:13.406695   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.406705   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:13.406713   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:13.406773   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:13.448125   58817 cri.go:89] found id: ""
	I0719 15:51:13.448153   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.448164   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:13.448171   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:13.448233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:13.483281   58817 cri.go:89] found id: ""
	I0719 15:51:13.483306   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.483315   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:13.483323   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:13.483384   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:13.522499   58817 cri.go:89] found id: ""
	I0719 15:51:13.522527   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.522538   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:13.522545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:13.522605   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:13.560011   58817 cri.go:89] found id: ""
	I0719 15:51:13.560038   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.560049   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:13.560056   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:13.560115   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:13.596777   58817 cri.go:89] found id: ""
	I0719 15:51:13.596812   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.596824   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:13.596832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:13.596883   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:13.633765   58817 cri.go:89] found id: ""
	I0719 15:51:13.633790   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.633798   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:13.633804   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:13.633857   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:13.670129   58817 cri.go:89] found id: ""
	I0719 15:51:13.670151   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.670160   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:13.670168   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:13.670179   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:13.745337   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:13.745363   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:13.745375   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:13.827800   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:13.827831   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.871659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:13.871695   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:13.925445   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:13.925478   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.439455   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:16.454414   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:16.454485   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:16.494962   58817 cri.go:89] found id: ""
	I0719 15:51:16.494987   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.494997   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:16.495004   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:16.495048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:16.540948   58817 cri.go:89] found id: ""
	I0719 15:51:16.540978   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.540986   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:16.540992   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:16.541052   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:16.588886   58817 cri.go:89] found id: ""
	I0719 15:51:16.588916   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.588926   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:16.588933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:16.588990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:16.649174   58817 cri.go:89] found id: ""
	I0719 15:51:16.649198   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.649207   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:16.649214   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:16.649260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:16.688759   58817 cri.go:89] found id: ""
	I0719 15:51:16.688787   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.688794   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:16.688800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:16.688860   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:16.724730   58817 cri.go:89] found id: ""
	I0719 15:51:16.724759   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.724767   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:16.724773   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:16.724831   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:16.762972   58817 cri.go:89] found id: ""
	I0719 15:51:16.762995   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.763002   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:16.763007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:16.763058   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:16.798054   58817 cri.go:89] found id: ""
	I0719 15:51:16.798080   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.798088   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:16.798096   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:16.798107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:16.887495   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:16.887533   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:16.929384   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:16.929412   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:16.978331   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:16.978362   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.991663   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:16.991687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:17.064706   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
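The repeating blocks above are one pass of minikube's control-plane health loop: after `pgrep` finds no kube-apiserver process, it checks component by component whether any CRI container exists for the expected name. A minimal sketch of that per-component check, assuming nothing beyond the exact `crictl` invocations already visible in the log (their empty output is what produces the "0 containers" lines), could look like:

    # Sketch of the per-component container checks recorded in the log above.
    # Each crictl call prints matching container IDs; empty output means the
    # component has no container, which the loop reports as "0 containers".
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "found: $ids"
      fi
    done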
	I0719 15:51:15.311404   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:17.312317   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.013193   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:18.014317   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.583403   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.082807   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.565881   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:19.579476   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:19.579536   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:19.614551   58817 cri.go:89] found id: ""
	I0719 15:51:19.614576   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.614586   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:19.614595   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:19.614655   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:19.657984   58817 cri.go:89] found id: ""
	I0719 15:51:19.658012   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.658023   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:19.658030   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:19.658098   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:19.692759   58817 cri.go:89] found id: ""
	I0719 15:51:19.692785   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.692793   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:19.692800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:19.692855   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:19.726119   58817 cri.go:89] found id: ""
	I0719 15:51:19.726148   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.726158   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:19.726174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:19.726230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:19.763348   58817 cri.go:89] found id: ""
	I0719 15:51:19.763372   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.763379   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:19.763385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:19.763439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:19.796880   58817 cri.go:89] found id: ""
	I0719 15:51:19.796909   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.796923   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:19.796929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:19.796977   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:19.831819   58817 cri.go:89] found id: ""
	I0719 15:51:19.831845   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.831853   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:19.831859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:19.831913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:19.866787   58817 cri.go:89] found id: ""
	I0719 15:51:19.866814   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.866825   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:19.866835   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:19.866848   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:19.914087   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:19.914120   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.927236   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:19.927260   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:19.995619   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:19.995643   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:19.995658   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:20.084355   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:20.084385   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:22.623263   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:22.637745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:22.637818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:22.678276   58817 cri.go:89] found id: ""
	I0719 15:51:22.678305   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.678317   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:22.678325   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:22.678378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:22.716710   58817 cri.go:89] found id: ""
	I0719 15:51:22.716736   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.716753   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:22.716761   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:22.716828   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:22.754965   58817 cri.go:89] found id: ""
	I0719 15:51:22.754993   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.755002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:22.755008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:22.755054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:22.788474   58817 cri.go:89] found id: ""
	I0719 15:51:22.788508   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.788519   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:22.788527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:22.788586   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:22.823838   58817 cri.go:89] found id: ""
	I0719 15:51:22.823872   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.823882   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:22.823889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:22.823950   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:22.863086   58817 cri.go:89] found id: ""
	I0719 15:51:22.863127   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.863138   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:22.863146   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:22.863211   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:22.899292   58817 cri.go:89] found id: ""
	I0719 15:51:22.899321   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.899331   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:22.899339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:22.899403   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:22.932292   58817 cri.go:89] found id: ""
	I0719 15:51:22.932318   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.932328   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:22.932338   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:22.932353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:23.003438   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:23.003460   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:23.003477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:23.088349   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:23.088391   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:23.132169   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:23.132194   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:23.184036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:23.184069   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.812659   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.813178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.311781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:20.512610   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:22.512707   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.083030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:23.583501   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.698493   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:25.712199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:25.712267   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:25.750330   58817 cri.go:89] found id: ""
	I0719 15:51:25.750358   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.750368   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:25.750375   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:25.750434   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:25.784747   58817 cri.go:89] found id: ""
	I0719 15:51:25.784777   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.784788   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:25.784794   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:25.784853   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:25.821272   58817 cri.go:89] found id: ""
	I0719 15:51:25.821297   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.821308   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:25.821315   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:25.821370   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:25.858697   58817 cri.go:89] found id: ""
	I0719 15:51:25.858723   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.858732   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:25.858737   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:25.858782   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:25.901706   58817 cri.go:89] found id: ""
	I0719 15:51:25.901738   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.901749   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:25.901757   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:25.901818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:25.943073   58817 cri.go:89] found id: ""
	I0719 15:51:25.943103   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.943115   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:25.943122   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:25.943190   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:25.982707   58817 cri.go:89] found id: ""
	I0719 15:51:25.982731   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.982739   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:25.982745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:25.982791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:26.023419   58817 cri.go:89] found id: ""
	I0719 15:51:26.023442   58817 logs.go:276] 0 containers: []
	W0719 15:51:26.023449   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:26.023456   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:26.023468   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:26.103842   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:26.103875   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:26.143567   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:26.143594   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:26.199821   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:26.199862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:26.214829   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:26.214865   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:26.287368   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:26.312416   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.313406   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.513171   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:27.012377   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:29.014890   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.583785   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.083633   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.788202   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:28.801609   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:28.801676   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:28.834911   58817 cri.go:89] found id: ""
	I0719 15:51:28.834937   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.834947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:28.834955   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:28.835013   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:28.868219   58817 cri.go:89] found id: ""
	I0719 15:51:28.868242   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.868250   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:28.868256   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:28.868315   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:28.904034   58817 cri.go:89] found id: ""
	I0719 15:51:28.904055   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.904063   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:28.904068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:28.904121   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:28.941019   58817 cri.go:89] found id: ""
	I0719 15:51:28.941051   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.941061   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:28.941068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:28.941129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:28.976309   58817 cri.go:89] found id: ""
	I0719 15:51:28.976335   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.976346   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:28.976352   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:28.976410   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:29.011340   58817 cri.go:89] found id: ""
	I0719 15:51:29.011368   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.011378   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:29.011388   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:29.011447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:29.044356   58817 cri.go:89] found id: ""
	I0719 15:51:29.044378   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.044385   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:29.044390   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:29.044438   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:29.080883   58817 cri.go:89] found id: ""
	I0719 15:51:29.080910   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.080919   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:29.080929   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:29.080941   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:29.160266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:29.160303   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:29.198221   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:29.198267   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:29.249058   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:29.249088   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:29.262711   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:29.262740   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:29.335654   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
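When none of the components are found, the loop falls back to gathering host-level logs before retrying. The commands it runs over SSH are reproduced verbatim in the lines above; a condensed sketch of that gathering pass is shown below, with the describe-nodes step being the one that keeps failing with "connection refused" because nothing is listening on localhost:8443:

    # Log-gathering pass, as recorded above (run on the minikube node).
    sudo journalctl -u kubelet -n 400                                    # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings/errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig                       # fails: localhost:8443 refused
    sudo journalctl -u crio -n 400                                       # CRI-O logs
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a        # container status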
	I0719 15:51:31.836354   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:31.851895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:31.851957   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:31.887001   58817 cri.go:89] found id: ""
	I0719 15:51:31.887036   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.887052   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:31.887058   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:31.887107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:31.922102   58817 cri.go:89] found id: ""
	I0719 15:51:31.922132   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.922140   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:31.922145   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:31.922196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:31.960183   58817 cri.go:89] found id: ""
	I0719 15:51:31.960208   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.960215   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:31.960221   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:31.960263   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:31.994822   58817 cri.go:89] found id: ""
	I0719 15:51:31.994849   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.994859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:31.994865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:31.994912   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:32.034110   58817 cri.go:89] found id: ""
	I0719 15:51:32.034136   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.034145   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:32.034151   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:32.034209   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:32.071808   58817 cri.go:89] found id: ""
	I0719 15:51:32.071834   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.071842   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:32.071847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:32.071910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:32.110784   58817 cri.go:89] found id: ""
	I0719 15:51:32.110810   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.110820   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:32.110828   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:32.110895   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:32.148052   58817 cri.go:89] found id: ""
	I0719 15:51:32.148086   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.148097   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:32.148108   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:32.148124   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:32.198891   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:32.198926   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:32.212225   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:32.212251   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:32.288389   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:32.288412   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:32.288431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:32.368196   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:32.368229   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:30.811822   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.813013   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:31.512155   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.012636   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:30.083916   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.582845   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.582945   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.911872   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:34.926689   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:34.926771   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:34.959953   58817 cri.go:89] found id: ""
	I0719 15:51:34.959982   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.959992   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:34.960000   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:34.960061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:34.999177   58817 cri.go:89] found id: ""
	I0719 15:51:34.999206   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.999216   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:34.999223   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:34.999283   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:35.036001   58817 cri.go:89] found id: ""
	I0719 15:51:35.036034   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.036045   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:35.036052   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:35.036099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:35.070375   58817 cri.go:89] found id: ""
	I0719 15:51:35.070404   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.070415   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:35.070423   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:35.070483   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:35.106940   58817 cri.go:89] found id: ""
	I0719 15:51:35.106969   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.106979   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:35.106984   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:35.107031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:35.151664   58817 cri.go:89] found id: ""
	I0719 15:51:35.151688   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.151695   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:35.151700   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:35.151748   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:35.187536   58817 cri.go:89] found id: ""
	I0719 15:51:35.187564   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.187578   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:35.187588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:35.187662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.222614   58817 cri.go:89] found id: ""
	I0719 15:51:35.222642   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.222652   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:35.222662   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:35.222677   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:35.273782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:35.273816   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:35.288147   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:35.288176   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:35.361085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:35.361107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:35.361118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:35.443327   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:35.443358   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:37.994508   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:38.007709   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:38.007779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:38.040910   58817 cri.go:89] found id: ""
	I0719 15:51:38.040940   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.040947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:38.040954   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:38.040999   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:38.080009   58817 cri.go:89] found id: ""
	I0719 15:51:38.080039   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.080058   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:38.080066   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:38.080137   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:38.115997   58817 cri.go:89] found id: ""
	I0719 15:51:38.116018   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.116026   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:38.116031   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:38.116079   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:38.150951   58817 cri.go:89] found id: ""
	I0719 15:51:38.150973   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.150981   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:38.150987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:38.151045   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:38.184903   58817 cri.go:89] found id: ""
	I0719 15:51:38.184938   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.184949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:38.184956   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:38.185014   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:38.218099   58817 cri.go:89] found id: ""
	I0719 15:51:38.218123   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.218131   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:38.218138   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:38.218192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:38.252965   58817 cri.go:89] found id: ""
	I0719 15:51:38.252990   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.252997   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:38.253003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:38.253047   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.313638   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:37.813400   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.013415   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.513387   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.583140   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:39.084770   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
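The interleaved pod_ready.go lines come from three other test clusters (PIDs 59208, 58376, 58417) polling their metrics-server pods, which never report Ready. A hypothetical command-line equivalent of that readiness check is sketched below; the pod name is taken from the log, but the jsonpath form is illustrative and not something the test itself runs:

    # Illustrative only: inspect the Ready condition that pod_ready.go is
    # waiting on for one of the metrics-server pods named in the log.
    kubectl -n kube-system get pod metrics-server-569cc877fc-h7hgv \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'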
	I0719 15:51:38.289710   58817 cri.go:89] found id: ""
	I0719 15:51:38.289739   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.289749   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:38.289757   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:38.289770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:38.340686   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:38.340715   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:38.354334   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:38.354357   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:38.424410   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:38.424438   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:38.424452   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:38.500744   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:38.500781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.043436   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:41.056857   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:41.056914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:41.093651   58817 cri.go:89] found id: ""
	I0719 15:51:41.093678   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.093688   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:41.093695   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:41.093749   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:41.129544   58817 cri.go:89] found id: ""
	I0719 15:51:41.129572   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.129580   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:41.129586   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:41.129646   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:41.163416   58817 cri.go:89] found id: ""
	I0719 15:51:41.163444   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.163457   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:41.163465   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:41.163520   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:41.199180   58817 cri.go:89] found id: ""
	I0719 15:51:41.199205   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.199212   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:41.199220   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:41.199274   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:41.233891   58817 cri.go:89] found id: ""
	I0719 15:51:41.233919   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.233929   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:41.233936   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:41.233990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:41.270749   58817 cri.go:89] found id: ""
	I0719 15:51:41.270777   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.270788   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:41.270794   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:41.270841   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:41.308365   58817 cri.go:89] found id: ""
	I0719 15:51:41.308393   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.308402   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:41.308408   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:41.308462   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:41.344692   58817 cri.go:89] found id: ""
	I0719 15:51:41.344720   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.344729   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:41.344738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:41.344749   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:41.420009   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:41.420035   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:41.420052   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:41.503356   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:41.503397   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.543875   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:41.543905   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:41.595322   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:41.595353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:40.312909   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:42.812703   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.011956   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:43.513117   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.584336   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.082447   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.110343   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:44.125297   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:44.125365   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:44.160356   58817 cri.go:89] found id: ""
	I0719 15:51:44.160387   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.160398   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:44.160405   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:44.160461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:44.195025   58817 cri.go:89] found id: ""
	I0719 15:51:44.195055   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.195065   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:44.195073   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:44.195140   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:44.227871   58817 cri.go:89] found id: ""
	I0719 15:51:44.227907   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.227929   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:44.227937   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:44.228000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:44.265270   58817 cri.go:89] found id: ""
	I0719 15:51:44.265296   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.265305   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:44.265312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:44.265368   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:44.298714   58817 cri.go:89] found id: ""
	I0719 15:51:44.298744   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.298755   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:44.298762   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:44.298826   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:44.332638   58817 cri.go:89] found id: ""
	I0719 15:51:44.332665   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.332673   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:44.332679   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:44.332738   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:44.366871   58817 cri.go:89] found id: ""
	I0719 15:51:44.366897   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.366906   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:44.366913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:44.366980   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:44.409353   58817 cri.go:89] found id: ""
	I0719 15:51:44.409381   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.409392   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:44.409402   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:44.409417   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.446148   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:44.446178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:44.497188   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:44.497217   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.511904   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:44.511935   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:44.577175   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:44.577193   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:44.577208   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.161809   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:47.175425   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:47.175490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:47.213648   58817 cri.go:89] found id: ""
	I0719 15:51:47.213674   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.213681   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:47.213687   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:47.213737   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:47.249941   58817 cri.go:89] found id: ""
	I0719 15:51:47.249967   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.249979   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:47.249986   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:47.250041   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:47.284232   58817 cri.go:89] found id: ""
	I0719 15:51:47.284254   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.284261   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:47.284267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:47.284318   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:47.321733   58817 cri.go:89] found id: ""
	I0719 15:51:47.321767   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.321778   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:47.321786   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:47.321844   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:47.358479   58817 cri.go:89] found id: ""
	I0719 15:51:47.358508   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.358520   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:47.358527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:47.358582   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:47.390070   58817 cri.go:89] found id: ""
	I0719 15:51:47.390098   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.390108   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:47.390116   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:47.390176   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:47.429084   58817 cri.go:89] found id: ""
	I0719 15:51:47.429111   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.429118   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:47.429124   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:47.429179   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:47.469938   58817 cri.go:89] found id: ""
	I0719 15:51:47.469969   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.469979   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:47.469991   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:47.470005   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:47.524080   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:47.524110   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:47.538963   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:47.538993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:47.609107   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:47.609128   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:47.609143   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.691984   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:47.692028   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
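	[editor's note] The block above is minikube's diagnostics pass while the v1.20.0 control plane is down: it probes for each control-plane container by name with "crictl ps -a --quiet --name=...", then collects kubelet, dmesg and CRI-O logs; "kubectl describe nodes" fails because nothing is listening on localhost:8443 yet. The following is an illustrative Go sketch of that container probe only (not minikube's code); it assumes crictl is on PATH and runnable via sudo.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the log's "sudo crictl ps -a --quiet --name=<name>"
	// calls: crictl prints one container ID per line, filtered by container name.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("%s: error: %v\n", name, err)
				continue
			}
			// In the log above every probe returns 0 containers, matching
			// "No container was found matching ...".
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}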
	I0719 15:51:44.813328   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:47.318119   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.013597   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.513037   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.083435   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.582222   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.234104   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:50.248706   58817 kubeadm.go:597] duration metric: took 4m2.874850727s to restartPrimaryControlPlane
	W0719 15:51:50.248802   58817 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:50.248827   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:50.712030   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:51:50.727328   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:51:50.737545   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:51:50.748830   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:51:50.748855   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:51:50.748900   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:51:50.758501   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:51:50.758548   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:51:50.767877   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:51:50.777413   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:51:50.777477   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:51:50.787005   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.795917   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:51:50.795971   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.805058   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:51:50.814014   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:51:50.814069   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
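	[editor's note] The grep/rm pairs above are minikube's stale-config cleanup before re-running kubeadm init: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if it does not reference it. Here every grep exits with status 2 because kubeadm reset already deleted the files, so each rm is a no-op. A minimal Go sketch of that pattern follows (illustrative only, not minikube's implementation).

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleKubeconfigs removes any kubeconfig that does not point at the
	// expected control-plane endpoint, so kubeadm init will rewrite it.
	func cleanStaleKubeconfigs(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				// Missing file (the case in the log, right after "kubeadm reset"): nothing to clean.
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				if err := os.Remove(f); err != nil {
					fmt.Fprintf(os.Stderr, "remove %s: %v\n", f, err)
				}
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}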
	I0719 15:51:50.823876   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:51:50.893204   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:51:50.893281   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:51:51.028479   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:51:51.028607   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:51:51.028698   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:51:51.212205   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:51:51.214199   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:51:51.214313   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:51:51.214423   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:51:51.214546   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:51:51.214625   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:51:51.214728   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:51:51.214813   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:51:51.214918   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:51:51.215011   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:51:51.215121   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:51:51.215231   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:51:51.215296   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:51:51.215381   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:51:51.275010   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:51:51.481366   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:51:51.685208   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:51:51.799007   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:51:51.820431   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:51:51.822171   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:51:51.822257   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:51:51.984066   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:51:51.986034   58817 out.go:204]   - Booting up control plane ...
	I0719 15:51:51.986137   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:51:51.988167   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:51:51.989122   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:51:51.989976   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:51:52.000879   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
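	[editor's note] At [wait-control-plane] kubeadm gives the kubelet up to 4m0s to start the static Pods whose manifests were just written under /etc/kubernetes/manifests (the same files named in the --ignore-preflight-errors list above). A small illustrative Go check that those four manifests are in place (not part of minikube):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		dir := "/etc/kubernetes/manifests"
		// Standard kubeadm static Pod manifest names for the control plane.
		for _, name := range []string{"kube-apiserver.yaml", "kube-controller-manager.yaml", "kube-scheduler.yaml", "etcd.yaml"} {
			p := filepath.Join(dir, name)
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("missing: %s (%v)\n", p, err)
				continue
			}
			fmt.Printf("present: %s\n", p)
		}
	}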
	I0719 15:51:49.811847   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:51.812747   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.312028   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.514497   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:53.012564   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.585244   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:52.587963   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.576923   58417 pod_ready.go:81] duration metric: took 4m0.000887015s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	E0719 15:51:54.576954   58417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 15:51:54.576979   58417 pod_ready.go:38] duration metric: took 4m10.045017696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:51:54.577013   58417 kubeadm.go:597] duration metric: took 4m18.572474217s to restartPrimaryControlPlane
	W0719 15:51:54.577075   58417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:54.577107   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:56.314112   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:58.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:55.012915   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:57.512491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:01.312620   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:03.812880   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:59.512666   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:02.013784   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.314545   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:08.811891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:04.512583   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:09.016808   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:10.813197   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:13.313167   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:11.513329   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:14.012352   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:15.812105   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:17.812843   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:16.014362   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:18.513873   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:20.685347   58417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.108209289s)
	I0719 15:52:20.685431   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:20.699962   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:20.709728   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:20.719022   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:20.719038   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:52:20.719074   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:52:20.727669   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:52:20.727731   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:52:20.736851   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:52:20.745821   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:52:20.745867   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:52:20.755440   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.764307   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:52:20.764360   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.773759   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:52:20.782354   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:52:20.782420   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:52:20.791186   58417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:20.837700   58417 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 15:52:20.837797   58417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:52:20.958336   58417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:20.958486   58417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:20.958629   58417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:20.967904   58417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:20.969995   58417 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:20.970097   58417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:52:20.970197   58417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:20.970325   58417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:52:20.970438   58417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:52:20.970550   58417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:52:20.970633   58417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:52:20.970740   58417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:52:20.970840   58417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:52:20.970949   58417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:52:20.971049   58417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:52:20.971106   58417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:52:20.971184   58417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:21.175226   58417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:21.355994   58417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 15:52:21.453237   58417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:21.569014   58417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:21.672565   58417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:21.673036   58417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:21.675860   58417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:20.312428   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:22.312770   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:24.314183   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.013099   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:23.512341   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.677594   58417 out.go:204]   - Booting up control plane ...
	I0719 15:52:21.677694   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:21.677787   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:21.677894   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:21.695474   58417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:21.701352   58417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:21.701419   58417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:52:21.831941   58417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 15:52:21.832046   58417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 15:52:22.333073   58417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.399393ms
	I0719 15:52:22.333184   58417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 15:52:27.336964   58417 kubeadm.go:310] [api-check] The API server is healthy after 5.002306078s
	I0719 15:52:27.348152   58417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:27.366916   58417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:27.396214   58417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:27.396475   58417 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-382231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:27.408607   58417 kubeadm.go:310] [bootstrap-token] Using token: xdoy2n.29347ekmgral9ki3
	I0719 15:52:27.409857   58417 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:27.409991   58417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:27.415553   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:27.424772   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:27.428421   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:27.439922   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:27.443985   58417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:27.742805   58417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:28.253742   58417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 15:52:28.744380   58417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 15:52:28.744405   58417 kubeadm.go:310] 
	I0719 15:52:28.744486   58417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:28.744498   58417 kubeadm.go:310] 
	I0719 15:52:28.744581   58417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:28.744588   58417 kubeadm.go:310] 
	I0719 15:52:28.744633   58417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 15:52:28.744704   58417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:28.744783   58417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:28.744794   58417 kubeadm.go:310] 
	I0719 15:52:28.744877   58417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 15:52:28.744891   58417 kubeadm.go:310] 
	I0719 15:52:28.744944   58417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:28.744951   58417 kubeadm.go:310] 
	I0719 15:52:28.744992   58417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 15:52:28.745082   58417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:28.745172   58417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:28.745181   58417 kubeadm.go:310] 
	I0719 15:52:28.745253   58417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:28.745319   58417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 15:52:28.745332   58417 kubeadm.go:310] 
	I0719 15:52:28.745412   58417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745499   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 15:52:28.745518   58417 kubeadm.go:310] 	--control-plane 
	I0719 15:52:28.745525   58417 kubeadm.go:310] 
	I0719 15:52:28.745599   58417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:28.745609   58417 kubeadm.go:310] 
	I0719 15:52:28.745677   58417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745778   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 15:52:28.747435   58417 kubeadm.go:310] W0719 15:52:20.814208    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747697   58417 kubeadm.go:310] W0719 15:52:20.814905    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747795   58417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:52:28.747815   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:52:28.747827   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:52:28.749619   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:26.813409   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.814040   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:25.513048   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:27.514730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.750992   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:28.762976   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
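	[editor's note] Bridge CNI setup here is just one file: a 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist. The log does not show its contents; the Go constant below is a hypothetical example of the typical shape of a bridge plus portmap conflist (subnet, names and versions are assumptions, not taken from this run).

	package main

	import "fmt"

	const exampleBridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() { fmt.Println(exampleBridgeConflist) }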
	I0719 15:52:28.783894   58417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:28.783972   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.783989   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-382231 minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=no-preload-382231 minikube.k8s.io/primary=true
	I0719 15:52:28.808368   58417 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:29.005658   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.505702   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.005765   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.505834   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.005837   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.506329   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.006419   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.505701   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.005735   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.130121   58417 kubeadm.go:1113] duration metric: took 4.346215264s to wait for elevateKubeSystemPrivileges
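	[editor's note] The repeated "kubectl get sa default" calls above are a readiness poll: minikube retries every ~500ms until the default ServiceAccount exists, which signals that the controller-manager's service-account machinery is up, then records the elapsed time as the elevateKubeSystemPrivileges metric. An illustrative Go version of such a poll (not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount retries "kubectl get sa default" until it
	// succeeds or the timeout expires, mirroring the loop visible in the log.
	func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %v", timeout)
	}

	func main() {
		err := waitForDefaultServiceAccount(
			"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
			"/var/lib/minikube/kubeconfig",
			2*time.Minute,
		)
		fmt.Println("wait result:", err)
	}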
	I0719 15:52:33.130162   58417 kubeadm.go:394] duration metric: took 4m57.173876302s to StartCluster
	I0719 15:52:33.130187   58417 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.130290   58417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:52:33.131944   58417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.132178   58417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:52:33.132237   58417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:52:33.132339   58417 addons.go:69] Setting storage-provisioner=true in profile "no-preload-382231"
	I0719 15:52:33.132358   58417 addons.go:69] Setting default-storageclass=true in profile "no-preload-382231"
	I0719 15:52:33.132381   58417 addons.go:234] Setting addon storage-provisioner=true in "no-preload-382231"
	I0719 15:52:33.132385   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0719 15:52:33.132391   58417 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:52:33.132392   58417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-382231"
	I0719 15:52:33.132419   58417 addons.go:69] Setting metrics-server=true in profile "no-preload-382231"
	I0719 15:52:33.132423   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132444   58417 addons.go:234] Setting addon metrics-server=true in "no-preload-382231"
	W0719 15:52:33.132452   58417 addons.go:243] addon metrics-server should already be in state true
	I0719 15:52:33.132474   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132740   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132763   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132799   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132810   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132822   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132829   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.134856   58417 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:33.136220   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:33.149028   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0719 15:52:33.149128   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0719 15:52:33.149538   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.149646   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.150093   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150108   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150111   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150119   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150477   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150603   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150955   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.150971   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.151326   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I0719 15:52:33.151359   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.151715   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.152199   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.152223   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.152574   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.153136   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.153170   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.155187   58417 addons.go:234] Setting addon default-storageclass=true in "no-preload-382231"
	W0719 15:52:33.155207   58417 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:52:33.155235   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.155572   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.155602   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.170886   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0719 15:52:33.170884   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0719 15:52:33.171439   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.171510   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0719 15:52:33.171543   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172005   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172026   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172109   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172141   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172162   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172538   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172552   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172609   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172775   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.172831   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172875   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.173021   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.173381   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.173405   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.175118   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.175500   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.177023   58417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:52:33.177041   58417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:52:32.000607   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:52:32.000846   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:32.001125   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
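	[editor's note] For the v1.20.0 (old-k8s-version) cluster, kubeadm's kubelet-check never sees a healthy kubelet: the probe of http://localhost:10248/healthz is refused, meaning nothing is listening on the kubelet's health port. The sketch below performs the same probe kubeadm describes above; it is illustrative only.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the failure in the log: connection refused means the kubelet
			// is not running, or exited before binding the healthz port.
			fmt.Println("kubelet healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("kubelet healthz: %s (%s)\n", resp.Status, string(body))
	}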
	I0719 15:52:33.178348   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:33.178362   58417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:33.178377   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.178450   58417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.178469   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:52:33.178486   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.182287   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182598   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.182617   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182741   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.182948   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.183074   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.183204   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.183372   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183940   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.183959   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183994   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.184237   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.184356   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.184505   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.191628   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0719 15:52:33.191984   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.192366   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.192385   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.192707   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.192866   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.194285   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.194485   58417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.194499   58417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:33.194514   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.197526   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.197853   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.197872   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.198087   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.198335   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.198472   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.198604   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.382687   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:52:33.403225   58417 node_ready.go:35] waiting up to 6m0s for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430507   58417 node_ready.go:49] node "no-preload-382231" has status "Ready":"True"
	I0719 15:52:33.430535   58417 node_ready.go:38] duration metric: took 27.282654ms for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430546   58417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:33.482352   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
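	[editor's note] The pod_ready waits throughout this log (coredns here, the metrics-server pods earlier) poll the PodReady condition on system-critical pods. An illustrative client-go sketch of that check follows; client-go as a dependency and the hard-coded pod name (taken from the line above) are assumptions for the example, this is not minikube's code.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the PodReady condition is True, the same
	// condition the pod_ready waits in this log are testing for.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-5cfdc65f69-4xxpm", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", podIsReady(pod))
	}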
	I0719 15:52:33.555210   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.565855   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:33.565874   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:52:33.571653   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.609541   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:33.609569   58417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:33.674428   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:33.674455   58417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:33.746703   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
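	[editor's note] Addon enablement in this run is a two-step pattern: the rendered manifests are scp'd into /etc/kubernetes/addons, then applied in one shot with the bundled kubectl under the cluster's kubeconfig, as in the command above. An illustrative Go wrapper for that apply step (not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddonManifests runs "sudo KUBECONFIG=... kubectl apply -f m1 -f m2 ...",
	// matching the invocation style in the log (sudo accepts leading VAR=value args).
	func applyAddonManifests(kubectl string, manifests []string) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		err := applyAddonManifests("/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl", manifests)
		fmt.Println("apply result:", err)
	}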
	I0719 15:52:34.092029   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092051   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092341   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092359   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.092369   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092379   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092604   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092628   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.092634   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.093766   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.093785   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094025   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094043   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094076   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.094088   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094325   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094343   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094349   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128393   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.128412   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.128715   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128766   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.128775   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.319737   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.319764   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320141   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320161   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320165   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.320184   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.320199   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320441   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320462   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320475   58417 addons.go:475] Verifying addon metrics-server=true in "no-preload-382231"
	I0719 15:52:34.320482   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.322137   58417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:52:30.812091   59208 pod_ready.go:81] duration metric: took 4m0.006187238s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:30.812113   59208 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:30.812120   59208 pod_ready.go:38] duration metric: took 4m8.614544303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.812135   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:30.812161   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:30.812208   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:30.861054   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:30.861074   59208 cri.go:89] found id: ""
	I0719 15:52:30.861083   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:30.861144   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.865653   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:30.865708   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:30.900435   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:30.900459   59208 cri.go:89] found id: ""
	I0719 15:52:30.900468   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:30.900512   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.904686   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:30.904747   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:30.950618   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.950638   59208 cri.go:89] found id: ""
	I0719 15:52:30.950646   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:30.950691   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.955080   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:30.955147   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:30.996665   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:30.996691   59208 cri.go:89] found id: ""
	I0719 15:52:30.996704   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:30.996778   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.001122   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:31.001191   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:31.042946   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.042969   59208 cri.go:89] found id: ""
	I0719 15:52:31.042979   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:31.043039   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.047311   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:31.047365   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:31.086140   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.086166   59208 cri.go:89] found id: ""
	I0719 15:52:31.086175   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:31.086230   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.091742   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:31.091818   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:31.134209   59208 cri.go:89] found id: ""
	I0719 15:52:31.134241   59208 logs.go:276] 0 containers: []
	W0719 15:52:31.134252   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:31.134260   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:31.134316   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:31.173297   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.173325   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.173331   59208 cri.go:89] found id: ""
	I0719 15:52:31.173353   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:31.173414   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.177951   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.182099   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:31.182121   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:31.196541   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:31.196565   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:31.322528   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:31.322555   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:31.369628   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:31.369658   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:31.417834   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:31.417867   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:31.459116   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:31.459145   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.500986   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:31.501018   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.578557   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:31.578606   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:31.635053   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:31.635082   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:31.692604   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:31.692635   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.729765   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:31.729801   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.766152   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:31.766177   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:32.301240   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:32.301278   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
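
The log-gathering cycle above follows one pattern throughout: resolve a container ID with "crictl ps -a --quiet --name=<component>", then tail its logs with "crictl logs --tail 400 <id>". The sketch below is a minimal stand-alone Go version of that pattern; it runs crictl locally via sudo instead of over minikube's ssh_runner, and the "kube-apiserver" target is just an example.

    // Sketch only: find CRI containers by name, then tail their logs, mirroring the
    // "listing CRI containers" / "Gathering logs for ..." steps in the log above.
    // Assumes crictl is installed on the node and may be invoked via sudo.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns all container IDs (running or exited) whose name matches.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs returns the last n log lines of one container.
    func tailLogs(id string, n int) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil || len(ids) == 0 {
            fmt.Println("no kube-apiserver container found:", err)
            return
        }
        logs, err := tailLogs(ids[0], 400)
        if err != nil {
            fmt.Println("crictl logs failed:", err)
            return
        }
        fmt.Print(logs)
    }
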
	I0719 15:52:30.013083   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:32.013142   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:34.323358   58417 addons.go:510] duration metric: took 1.19112329s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:52:37.001693   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:37.001896   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:34.849019   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:34.866751   59208 api_server.go:72] duration metric: took 4m20.402312557s to wait for apiserver process to appear ...
	I0719 15:52:34.866779   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:34.866816   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:34.866876   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:34.905505   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.905532   59208 cri.go:89] found id: ""
	I0719 15:52:34.905542   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:34.905609   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.910996   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:34.911069   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:34.958076   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:34.958100   59208 cri.go:89] found id: ""
	I0719 15:52:34.958110   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:34.958166   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.962439   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:34.962507   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:34.999095   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:34.999117   59208 cri.go:89] found id: ""
	I0719 15:52:34.999126   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:34.999178   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.003785   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:35.003848   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:35.042585   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.042613   59208 cri.go:89] found id: ""
	I0719 15:52:35.042622   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:35.042683   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.048705   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:35.048770   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:35.092408   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.092435   59208 cri.go:89] found id: ""
	I0719 15:52:35.092444   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:35.092499   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.096983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:35.097050   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:35.135694   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.135717   59208 cri.go:89] found id: ""
	I0719 15:52:35.135726   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:35.135782   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.140145   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:35.140223   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:35.178912   59208 cri.go:89] found id: ""
	I0719 15:52:35.178938   59208 logs.go:276] 0 containers: []
	W0719 15:52:35.178948   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:35.178955   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:35.179015   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:35.229067   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.229090   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.229104   59208 cri.go:89] found id: ""
	I0719 15:52:35.229112   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:35.229172   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.234985   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.240098   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:35.240120   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:35.299418   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:35.299449   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:35.316294   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:35.316330   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:35.433573   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:35.433610   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:35.479149   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:35.479181   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:35.526270   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:35.526299   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.564209   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:35.564241   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.601985   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:35.602020   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.669986   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:35.670015   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.711544   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:35.711580   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:35.763800   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:35.763831   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:35.822699   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:35.822732   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.863377   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:35.863422   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:38.777749   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:52:38.781984   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:52:38.782935   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:38.782955   59208 api_server.go:131] duration metric: took 3.916169938s to wait for apiserver health ...
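
The healthz step above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns "ok" in the body when the control plane is healthy. A minimal sketch follows; the address is copied from the log, and skipping TLS verification is an illustrative shortcut, since minikube itself authenticates with the cluster's client certificates.

    // Sketch only: probe the apiserver healthz endpoint seen in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.61.144:8444/healthz")
        if err != nil {
            fmt.Println("healthz request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" on a healthy control plane
    }
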
	I0719 15:52:38.782963   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:38.782983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:38.783026   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:38.818364   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:38.818387   59208 cri.go:89] found id: ""
	I0719 15:52:38.818395   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:38.818442   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.823001   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:38.823054   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:38.857871   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:38.857900   59208 cri.go:89] found id: ""
	I0719 15:52:38.857909   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:38.857958   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.864314   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:38.864375   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:38.910404   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:38.910434   59208 cri.go:89] found id: ""
	I0719 15:52:38.910445   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:38.910505   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.915588   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:38.915645   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:38.952981   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:38.953002   59208 cri.go:89] found id: ""
	I0719 15:52:38.953009   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:38.953055   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.957397   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:38.957447   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:39.002973   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.003001   59208 cri.go:89] found id: ""
	I0719 15:52:39.003011   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:39.003059   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.007496   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:39.007568   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:39.045257   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.045282   59208 cri.go:89] found id: ""
	I0719 15:52:39.045291   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:39.045351   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.049358   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:39.049415   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:39.083263   59208 cri.go:89] found id: ""
	I0719 15:52:39.083303   59208 logs.go:276] 0 containers: []
	W0719 15:52:39.083314   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:39.083321   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:39.083391   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:39.121305   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.121348   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.121354   59208 cri.go:89] found id: ""
	I0719 15:52:39.121363   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:39.121421   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.126259   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.130395   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:39.130413   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:39.171213   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:39.171239   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.206545   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:39.206577   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:39.267068   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:39.267105   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:39.373510   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:39.373544   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.512374   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.012559   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:39.013766   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:35.495479   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.989424   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:38.489746   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.489775   58417 pod_ready.go:81] duration metric: took 5.007393051s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.489790   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495855   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.495884   58417 pod_ready.go:81] duration metric: took 6.085398ms for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495895   58417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:40.502651   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:41.503286   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.503309   58417 pod_ready.go:81] duration metric: took 3.007406201s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.503321   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513225   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.513245   58417 pod_ready.go:81] duration metric: took 9.916405ms for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513256   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517651   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.517668   58417 pod_ready.go:81] duration metric: took 4.40518ms for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517677   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522529   58417 pod_ready.go:92] pod "kube-proxy-qd84x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.522544   58417 pod_ready.go:81] duration metric: took 4.861257ms for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522551   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687964   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.687987   58417 pod_ready.go:81] duration metric: took 165.428951ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687997   58417 pod_ready.go:38] duration metric: took 8.257437931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
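
The pod_ready wait summarized above checks that every pod matching the listed labels reports the Ready condition as True. A rough client-go equivalent is sketched below; the kubeconfig path and the single k8s-app=kube-dns selector are illustrative assumptions, not minikube's actual implementation.

    // Sketch only: list kube-system pods by label and report their Ready condition.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, isReady(p))
        }
    }
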
	I0719 15:52:41.688016   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:41.688069   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:41.705213   58417 api_server.go:72] duration metric: took 8.573000368s to wait for apiserver process to appear ...
	I0719 15:52:41.705236   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:41.705256   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:52:41.709425   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:52:41.710427   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:52:41.710447   58417 api_server.go:131] duration metric: took 5.203308ms to wait for apiserver health ...
	I0719 15:52:41.710455   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:41.890063   58417 system_pods.go:59] 9 kube-system pods found
	I0719 15:52:41.890091   58417 system_pods.go:61] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:41.890095   58417 system_pods.go:61] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:41.890099   58417 system_pods.go:61] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:41.890103   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:41.890106   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:41.890109   58417 system_pods.go:61] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:41.890112   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:41.890117   58417 system_pods.go:61] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:41.890121   58417 system_pods.go:61] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:41.890128   58417 system_pods.go:74] duration metric: took 179.666477ms to wait for pod list to return data ...
	I0719 15:52:41.890135   58417 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.086946   58417 default_sa.go:45] found service account: "default"
	I0719 15:52:42.086973   58417 default_sa.go:55] duration metric: took 196.832888ms for default service account to be created ...
	I0719 15:52:42.086984   58417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.289457   58417 system_pods.go:86] 9 kube-system pods found
	I0719 15:52:42.289483   58417 system_pods.go:89] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:42.289489   58417 system_pods.go:89] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:42.289493   58417 system_pods.go:89] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:42.289498   58417 system_pods.go:89] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:42.289502   58417 system_pods.go:89] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:42.289506   58417 system_pods.go:89] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:42.289510   58417 system_pods.go:89] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:42.289518   58417 system_pods.go:89] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.289523   58417 system_pods.go:89] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:42.289530   58417 system_pods.go:126] duration metric: took 202.54151ms to wait for k8s-apps to be running ...
	I0719 15:52:42.289536   58417 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.289575   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.304866   58417 system_svc.go:56] duration metric: took 15.319153ms WaitForService to wait for kubelet
	I0719 15:52:42.304931   58417 kubeadm.go:582] duration metric: took 9.172718104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.304958   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.488087   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.488108   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.488122   58417 node_conditions.go:105] duration metric: took 183.159221ms to run NodePressure ...
	I0719 15:52:42.488135   58417 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.488144   58417 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.488157   58417 start.go:255] writing updated cluster config ...
	I0719 15:52:42.488453   58417 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.536465   58417 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 15:52:42.538606   58417 out.go:177] * Done! kubectl is now configured to use "no-preload-382231" cluster and "default" namespace by default
	I0719 15:52:39.422000   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:39.422034   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:39.473826   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:39.473860   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:39.515998   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:39.516023   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:39.559475   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:39.559506   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:39.574174   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:39.574205   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.615906   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:39.615933   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.676764   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:39.676795   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.714437   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:39.714467   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:42.584088   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:42.584114   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.584119   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.584123   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.584127   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.584130   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.584133   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.584138   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.584143   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.584150   59208 system_pods.go:74] duration metric: took 3.801182741s to wait for pod list to return data ...
	I0719 15:52:42.584156   59208 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.586910   59208 default_sa.go:45] found service account: "default"
	I0719 15:52:42.586934   59208 default_sa.go:55] duration metric: took 2.771722ms for default service account to be created ...
	I0719 15:52:42.586943   59208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.593611   59208 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:42.593634   59208 system_pods.go:89] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.593639   59208 system_pods.go:89] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.593645   59208 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.593650   59208 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.593654   59208 system_pods.go:89] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.593658   59208 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.593669   59208 system_pods.go:89] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.593673   59208 system_pods.go:89] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.593680   59208 system_pods.go:126] duration metric: took 6.731347ms to wait for k8s-apps to be running ...
	I0719 15:52:42.593687   59208 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.593726   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.615811   59208 system_svc.go:56] duration metric: took 22.114487ms WaitForService to wait for kubelet
	I0719 15:52:42.615841   59208 kubeadm.go:582] duration metric: took 4m28.151407807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.615864   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.619021   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.619040   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.619050   59208 node_conditions.go:105] duration metric: took 3.180958ms to run NodePressure ...
	I0719 15:52:42.619060   59208 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.619067   59208 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.619079   59208 start.go:255] writing updated cluster config ...
	I0719 15:52:42.619329   59208 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.677117   59208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:42.679317   59208 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-601445" cluster and "default" namespace by default
	I0719 15:52:41.514013   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:44.012173   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:47.002231   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:47.002432   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
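
The repeating [kubelet-check] lines above come from kubeadm probing the kubelet's local healthz endpoint on port 10248 and getting connection refused, which usually means the kubelet service has not come up yet. A minimal sketch of that probe, meaningful only when run on the node itself:

    // Sketch only: probe the kubelet's local healthz endpoint (default port 10248).
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            fmt.Println("kubelet not healthy:", err) // e.g. "connection refused" while kubelet is down
            return
        }
        defer resp.Body.Close()
        fmt.Println("kubelet healthz:", resp.Status)
    }
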
	I0719 15:52:46.013717   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:48.013121   58376 pod_ready.go:81] duration metric: took 4m0.006772624s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:48.013143   58376 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:48.013150   58376 pod_ready.go:38] duration metric: took 4m4.417474484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:48.013165   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:48.013194   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:48.013234   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:48.067138   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.067166   58376 cri.go:89] found id: ""
	I0719 15:52:48.067175   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:48.067218   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.071486   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:48.071531   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:48.115491   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.115514   58376 cri.go:89] found id: ""
	I0719 15:52:48.115525   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:48.115583   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.119693   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:48.119750   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:48.161158   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.161185   58376 cri.go:89] found id: ""
	I0719 15:52:48.161194   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:48.161257   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.165533   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:48.165584   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:48.207507   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.207528   58376 cri.go:89] found id: ""
	I0719 15:52:48.207537   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:48.207596   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.212070   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:48.212145   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:48.250413   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.250441   58376 cri.go:89] found id: ""
	I0719 15:52:48.250451   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:48.250510   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.255025   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:48.255095   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:48.289898   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.289922   58376 cri.go:89] found id: ""
	I0719 15:52:48.289930   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:48.289976   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.294440   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:48.294489   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:48.329287   58376 cri.go:89] found id: ""
	I0719 15:52:48.329314   58376 logs.go:276] 0 containers: []
	W0719 15:52:48.329326   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:48.329332   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:48.329394   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:48.373215   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.373242   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.373248   58376 cri.go:89] found id: ""
	I0719 15:52:48.373257   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:48.373311   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.377591   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.381610   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:48.381635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:48.440106   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:48.440148   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:48.455200   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:48.455234   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.496729   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:48.496757   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.535475   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:48.535501   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.592954   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:48.592993   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.635925   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:48.635957   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.671611   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:48.671642   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:48.809648   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:48.809681   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.863327   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:48.863361   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.902200   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:48.902245   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.937497   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:48.937525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:49.446900   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:49.446933   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:51.988535   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:52.005140   58376 api_server.go:72] duration metric: took 4m16.116469116s to wait for apiserver process to appear ...
	I0719 15:52:52.005165   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:52.005206   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:52.005258   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:52.041113   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.041143   58376 cri.go:89] found id: ""
	I0719 15:52:52.041150   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:52.041199   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.045292   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:52.045349   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:52.086747   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.086770   58376 cri.go:89] found id: ""
	I0719 15:52:52.086778   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:52.086821   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.091957   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:52.092015   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:52.128096   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.128128   58376 cri.go:89] found id: ""
	I0719 15:52:52.128138   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:52.128204   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.132889   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:52.132949   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:52.168359   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.168389   58376 cri.go:89] found id: ""
	I0719 15:52:52.168398   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:52.168454   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.172577   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:52.172639   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:52.211667   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.211684   58376 cri.go:89] found id: ""
	I0719 15:52:52.211691   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:52.211740   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.215827   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:52.215893   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:52.252105   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.252130   58376 cri.go:89] found id: ""
	I0719 15:52:52.252140   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:52.252194   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.256407   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:52.256464   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:52.292646   58376 cri.go:89] found id: ""
	I0719 15:52:52.292675   58376 logs.go:276] 0 containers: []
	W0719 15:52:52.292685   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:52.292693   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:52.292755   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:52.326845   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.326875   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.326880   58376 cri.go:89] found id: ""
	I0719 15:52:52.326889   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:52.326946   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.331338   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.335530   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:52.335554   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.371981   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:52.372010   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.406921   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:52.406946   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.442975   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:52.443007   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:52.497838   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:52.497873   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:52.556739   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:52.556776   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:52.665610   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:52.665643   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.711547   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:52.711580   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.759589   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:52.759634   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.807300   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:52.807374   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.857159   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:52.857186   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.917896   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:52.917931   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:53.342603   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:53.342646   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:55.857727   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:52:55.861835   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:52:55.862804   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:55.862822   58376 api_server.go:131] duration metric: took 3.857650801s to wait for apiserver health ...
	I0719 15:52:55.862829   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:55.862852   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:55.862905   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:55.900840   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:55.900859   58376 cri.go:89] found id: ""
	I0719 15:52:55.900866   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:55.900909   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.906205   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:55.906291   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:55.950855   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:55.950879   58376 cri.go:89] found id: ""
	I0719 15:52:55.950887   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:55.950939   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.955407   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:55.955472   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:55.994954   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:55.994981   58376 cri.go:89] found id: ""
	I0719 15:52:55.994992   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:55.995052   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.999179   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:55.999241   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:56.036497   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.036521   58376 cri.go:89] found id: ""
	I0719 15:52:56.036530   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:56.036585   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.041834   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:56.041900   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:56.082911   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.082934   58376 cri.go:89] found id: ""
	I0719 15:52:56.082943   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:56.082998   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.087505   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:56.087571   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:56.124517   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.124544   58376 cri.go:89] found id: ""
	I0719 15:52:56.124554   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:56.124616   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.129221   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:56.129297   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:56.170151   58376 cri.go:89] found id: ""
	I0719 15:52:56.170177   58376 logs.go:276] 0 containers: []
	W0719 15:52:56.170193   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:56.170212   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:56.170292   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:56.218351   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:56.218377   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.218381   58376 cri.go:89] found id: ""
	I0719 15:52:56.218388   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:56.218437   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.223426   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.227742   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:56.227759   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.271701   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:56.271733   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:56.325333   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:56.325366   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:56.431391   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:56.431423   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:56.485442   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:56.485472   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:56.527493   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:56.527525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.563260   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:56.563289   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.600604   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:56.600635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.656262   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:56.656305   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:57.031511   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:57.031549   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:57.046723   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:57.046748   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:57.083358   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:57.083390   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:57.124108   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:57.124136   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:59.670804   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:59.670831   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.670836   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.670840   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.670844   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.670847   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.670850   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.670855   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.670859   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.670865   58376 system_pods.go:74] duration metric: took 3.808031391s to wait for pod list to return data ...
	I0719 15:52:59.670871   58376 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:59.673231   58376 default_sa.go:45] found service account: "default"
	I0719 15:52:59.673249   58376 default_sa.go:55] duration metric: took 2.372657ms for default service account to be created ...
	I0719 15:52:59.673255   58376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:59.678267   58376 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:59.678289   58376 system_pods.go:89] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.678296   58376 system_pods.go:89] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.678303   58376 system_pods.go:89] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.678310   58376 system_pods.go:89] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.678315   58376 system_pods.go:89] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.678322   58376 system_pods.go:89] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.678331   58376 system_pods.go:89] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.678341   58376 system_pods.go:89] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.678352   58376 system_pods.go:126] duration metric: took 5.090968ms to wait for k8s-apps to be running ...
	I0719 15:52:59.678362   58376 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:59.678411   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:59.695116   58376 system_svc.go:56] duration metric: took 16.750228ms WaitForService to wait for kubelet
	I0719 15:52:59.695139   58376 kubeadm.go:582] duration metric: took 4m23.806469478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:59.695163   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:59.697573   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:59.697592   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:59.697602   58376 node_conditions.go:105] duration metric: took 2.433643ms to run NodePressure ...
	I0719 15:52:59.697612   58376 start.go:241] waiting for startup goroutines ...
	I0719 15:52:59.697618   58376 start.go:246] waiting for cluster config update ...
	I0719 15:52:59.697629   58376 start.go:255] writing updated cluster config ...
	I0719 15:52:59.697907   58376 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:59.744965   58376 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:59.746888   58376 out.go:177] * Done! kubectl is now configured to use "embed-certs-817144" cluster and "default" namespace by default
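The block above waits in turn for apiserver health, the kube-system pods, the default service account and the kubelet service before declaring the profile ready. A minimal way to spot-check the same conditions by hand, assuming kubectl is pointed at the embed-certs-817144 context named in the "Done!" line (this is an illustrative sketch, not part of the test run):

    # apiserver health, same endpoint the log polls above
    kubectl --context embed-certs-817144 get --raw=/healthz
    # the eight kube-system pods listed in the log
    kubectl --context embed-certs-817144 get pods -n kube-system
    # kubelet service state on the node
    minikube ssh -p embed-certs-817144 "sudo systemctl is-active kubelet"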
	I0719 15:53:07.003006   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:07.003249   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004552   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:47.004805   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004816   58817 kubeadm.go:310] 
	I0719 15:53:47.004902   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:53:47.004996   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:53:47.005020   58817 kubeadm.go:310] 
	I0719 15:53:47.005068   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:53:47.005117   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:53:47.005246   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:53:47.005262   58817 kubeadm.go:310] 
	I0719 15:53:47.005397   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:53:47.005458   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:53:47.005508   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:53:47.005522   58817 kubeadm.go:310] 
	I0719 15:53:47.005643   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:53:47.005714   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:53:47.005720   58817 kubeadm.go:310] 
	I0719 15:53:47.005828   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:53:47.005924   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:53:47.005987   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:53:47.006080   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:53:47.006092   58817 kubeadm.go:310] 
	I0719 15:53:47.006824   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:53:47.006941   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:53:47.007028   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 15:53:47.007180   58817 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
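The wait-control-plane failure above ends with kubeadm's own troubleshooting hints. A minimal sketch of running those same checks against the minikube node, assuming <PROFILE> is replaced with the failing profile name (the profile is not identified at this point in the log):

    # status and recent logs of the kubelet the health check could not reach
    minikube ssh -p <PROFILE> "sudo systemctl status kubelet"
    minikube ssh -p <PROFILE> "sudo journalctl -xeu kubelet | tail -n 100"
    # list control-plane containers via the CRI-O socket, as suggested above
    minikube ssh -p <PROFILE> "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"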
	
	I0719 15:53:47.007244   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:53:47.468272   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:53:47.483560   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:53:47.494671   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:53:47.494691   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:53:47.494742   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:53:47.503568   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:53:47.503630   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:53:47.512606   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:53:47.521247   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:53:47.521303   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:53:47.530361   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.539748   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:53:47.539799   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.549243   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:53:47.559306   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:53:47.559369   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:53:47.570095   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:53:47.648871   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:53:47.649078   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:53:47.792982   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:53:47.793141   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:53:47.793254   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:53:47.992636   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:53:47.994547   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:53:47.994648   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:53:47.994734   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:53:47.994866   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:53:47.994963   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:53:47.995077   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:53:47.995148   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:53:47.995250   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:53:47.995336   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:53:47.995447   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:53:47.995549   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:53:47.995603   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:53:47.995685   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:53:48.092671   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:53:48.256432   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:53:48.334799   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:53:48.483435   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:53:48.504681   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:53:48.505503   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:53:48.505553   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:53:48.654795   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:53:48.656738   58817 out.go:204]   - Booting up control plane ...
	I0719 15:53:48.656849   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:53:48.664278   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:53:48.665556   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:53:48.666292   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:53:48.668355   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:54:28.670119   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:54:28.670451   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:28.670679   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:33.671159   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:33.671408   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:43.671899   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:43.672129   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:03.673219   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:03.673444   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674003   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:43.674282   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674311   58817 kubeadm.go:310] 
	I0719 15:55:43.674362   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:55:43.674430   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:55:43.674439   58817 kubeadm.go:310] 
	I0719 15:55:43.674479   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:55:43.674551   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:55:43.674694   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:55:43.674711   58817 kubeadm.go:310] 
	I0719 15:55:43.674872   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:55:43.674923   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:55:43.674973   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:55:43.674987   58817 kubeadm.go:310] 
	I0719 15:55:43.675076   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:55:43.675185   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:55:43.675204   58817 kubeadm.go:310] 
	I0719 15:55:43.675343   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:55:43.675486   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:55:43.675593   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:55:43.675698   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:55:43.675712   58817 kubeadm.go:310] 
	I0719 15:55:43.676679   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:55:43.676793   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:55:43.676881   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:55:43.676950   58817 kubeadm.go:394] duration metric: took 7m56.357000435s to StartCluster
	I0719 15:55:43.677009   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:55:43.677063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:55:43.720714   58817 cri.go:89] found id: ""
	I0719 15:55:43.720746   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.720757   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:55:43.720765   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:55:43.720832   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:55:43.758961   58817 cri.go:89] found id: ""
	I0719 15:55:43.758987   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.758995   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:55:43.759001   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:55:43.759048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:55:43.798844   58817 cri.go:89] found id: ""
	I0719 15:55:43.798872   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.798882   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:55:43.798889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:55:43.798960   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:55:43.835395   58817 cri.go:89] found id: ""
	I0719 15:55:43.835418   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.835426   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:55:43.835432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:55:43.835499   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:55:43.871773   58817 cri.go:89] found id: ""
	I0719 15:55:43.871800   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.871810   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:55:43.871817   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:55:43.871881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:55:43.903531   58817 cri.go:89] found id: ""
	I0719 15:55:43.903552   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.903559   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:55:43.903565   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:55:43.903613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:55:43.943261   58817 cri.go:89] found id: ""
	I0719 15:55:43.943288   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.943299   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:55:43.943306   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:55:43.943364   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:55:43.980788   58817 cri.go:89] found id: ""
	I0719 15:55:43.980815   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.980826   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:55:43.980837   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:55:43.980853   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:55:44.033880   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:55:44.033922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:55:44.048683   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:55:44.048709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:55:44.129001   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:55:44.129028   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:55:44.129043   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:55:44.245246   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:55:44.245282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:55:44.303587   58817 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:55:44.303632   58817 out.go:239] * 
	W0719 15:55:44.303689   58817 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.303716   58817 out.go:239] * 
	W0719 15:55:44.304733   58817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:55:44.308714   58817 out.go:177] 
	W0719 15:55:44.310103   58817 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.310163   58817 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:55:44.310190   58817 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:55:44.311707   58817 out.go:177] 
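	
	The exit above is minikube's K8S_KUBELET_NOT_RUNNING path: kubeadm wrote the static Pod manifests, but the kubelet never answered on 127.0.0.1:10248, so wait-control-plane timed out. A minimal triage sequence, assembled only from the suggestions printed in the log itself, might look like the following; the crictl socket path is the one shown above, while CONTAINERID and <profile> are placeholders rather than values taken from this run:
	
	  # Check whether the kubelet unit is running and why it last exited
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	
	  # List control-plane containers that started and crashed under CRI-O
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	
	  # Inspect the logs of a failing container found above
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	  # If the journal points at a cgroup-driver mismatch, retry with the suggested kubelet flag
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd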
	
	
	==> CRI-O <==
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.874579696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404904874548756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa362b67-3948-42f5-a4b9-4b90fcc5097b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.875229880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26f2d578-09bb-4ae9-9e73-49ca022e7f0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.875300814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26f2d578-09bb-4ae9-9e73-49ca022e7f0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.875590931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26f2d578-09bb-4ae9-9e73-49ca022e7f0c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.924020429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6488653-7b4a-4605-ac7f-fee505d1a47a name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.924211988Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6488653-7b4a-4605-ac7f-fee505d1a47a name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.926134185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b06e7cf5-fe8c-48fa-9a26-32bc3d8cfc4b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.926493097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404904926471868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b06e7cf5-fe8c-48fa-9a26-32bc3d8cfc4b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.927098448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08601b63-d477-4307-a261-76e50ebd7ef5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.927175262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08601b63-d477-4307-a261-76e50ebd7ef5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.927378561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08601b63-d477-4307-a261-76e50ebd7ef5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.976420079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16693bcb-e291-43b8-bfd8-c8023e23ae5b name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.976525242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16693bcb-e291-43b8-bfd8-c8023e23ae5b name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.977636374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e74fe46-acd9-4e76-9cc4-c1991fce98fa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.978138580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404904978108936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e74fe46-acd9-4e76-9cc4-c1991fce98fa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.978679475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d314625e-b742-48a1-9231-8599a985ba14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.978778136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d314625e-b742-48a1-9231-8599a985ba14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:44 no-preload-382231 crio[724]: time="2024-07-19 16:01:44.979175372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d314625e-b742-48a1-9231-8599a985ba14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 no-preload-382231 crio[724]: time="2024-07-19 16:01:45.027014213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f804bac-0130-42e6-982a-50bcf5ec73e5 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 no-preload-382231 crio[724]: time="2024-07-19 16:01:45.027115947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f804bac-0130-42e6-982a-50bcf5ec73e5 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 no-preload-382231 crio[724]: time="2024-07-19 16:01:45.028225798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a679cec-d972-4ba2-8383-73aebdb835bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 no-preload-382231 crio[724]: time="2024-07-19 16:01:45.028634433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404905028611214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a679cec-d972-4ba2-8383-73aebdb835bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 no-preload-382231 crio[724]: time="2024-07-19 16:01:45.029214942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=362f4e0e-9b94-4f1b-9ab9-0466a9489ae9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 no-preload-382231 crio[724]: time="2024-07-19 16:01:45.029288203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=362f4e0e-9b94-4f1b-9ab9-0466a9489ae9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 no-preload-382231 crio[724]: time="2024-07-19 16:01:45.029533450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=362f4e0e-9b94-4f1b-9ab9-0466a9489ae9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4eda5dba755ba       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   5f1258a23c752       kube-proxy-qd84x
	1f64c9c744fd5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8eb60a9774a1e       coredns-5cfdc65f69-zk22p
	936b78e859523       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   fa6d4aff4f662       storage-provisioner
	ec70406cf2daf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1f535cc47b65e       coredns-5cfdc65f69-4xxpm
	29cd3efbe0958       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   9e09562f0bff0       kube-controller-manager-no-preload-382231
	16192e7348afc       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   593fe20aa2d26       etcd-no-preload-382231
	1aed6ff1362a0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   39aae21812130       kube-apiserver-no-preload-382231
	76f6c5f0c8688       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   2ad839947a22d       kube-scheduler-no-preload-382231
	a15608be38472       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   29d6eb42a98c6       kube-apiserver-no-preload-382231
	
	
	==> coredns [1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-382231
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-382231
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=no-preload-382231
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:52:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-382231
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 16:01:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:57:44 +0000   Fri, 19 Jul 2024 15:52:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:57:44 +0000   Fri, 19 Jul 2024 15:52:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:57:44 +0000   Fri, 19 Jul 2024 15:52:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:57:44 +0000   Fri, 19 Jul 2024 15:52:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    no-preload-382231
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 691bf3048c134d3b99ae1d3b2842df38
	  System UUID:                691bf304-8c13-4d3b-99ae-1d3b2842df38
	  Boot ID:                    39770819-d2fb-48d1-b593-69c126cb1da9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-4xxpm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-5cfdc65f69-zk22p                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-382231                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-382231             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-382231    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-qd84x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-382231             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-78fcd8795b-rc6ft              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node no-preload-382231 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node no-preload-382231 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node no-preload-382231 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-382231 event: Registered Node no-preload-382231 in Controller
	
	
	==> dmesg <==
	[  +0.050714] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.535494] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.364388] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.560857] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.917312] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.058776] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062717] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.180281] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.154015] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.289269] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[ +14.949713] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +0.061134] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.954688] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +5.633631] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.459628] kauditd_printk_skb: 86 callbacks suppressed
	[Jul19 15:52] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.079872] systemd-fstab-generator[2941]: Ignoring "noauto" option for root device
	[  +4.396966] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.672565] systemd-fstab-generator[3271]: Ignoring "noauto" option for root device
	[  +5.404903] systemd-fstab-generator[3387]: Ignoring "noauto" option for root device
	[  +0.142154] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.122564] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886] <==
	{"level":"info","ts":"2024-07-19T15:52:23.161203Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:52:23.163371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc switched to configuration voters=(13597188278260378108)"}
	{"level":"info","ts":"2024-07-19T15:52:23.16369Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a9051c714e34311b","local-member-id":"bcb2eab2b5d0a9fc","added-peer-id":"bcb2eab2b5d0a9fc","added-peer-peer-urls":["https://192.168.39.227:2380"]}
	{"level":"info","ts":"2024-07-19T15:52:23.163953Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"info","ts":"2024-07-19T15:52:23.164051Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.227:2380"}
	{"level":"info","ts":"2024-07-19T15:52:23.3999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T15:52:23.400073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T15:52:23.400104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc received MsgPreVoteResp from bcb2eab2b5d0a9fc at term 1"}
	{"level":"info","ts":"2024-07-19T15:52:23.400202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:52:23.400235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc received MsgVoteResp from bcb2eab2b5d0a9fc at term 2"}
	{"level":"info","ts":"2024-07-19T15:52:23.40032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc became leader at term 2"}
	{"level":"info","ts":"2024-07-19T15:52:23.40033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bcb2eab2b5d0a9fc elected leader bcb2eab2b5d0a9fc at term 2"}
	{"level":"info","ts":"2024-07-19T15:52:23.405328Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"bcb2eab2b5d0a9fc","local-member-attributes":"{Name:no-preload-382231 ClientURLs:[https://192.168.39.227:2379]}","request-path":"/0/members/bcb2eab2b5d0a9fc/attributes","cluster-id":"a9051c714e34311b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:52:23.40548Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:52:23.405691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:52:23.406266Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.409456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:52:23.418152Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:52:23.412207Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T15:52:23.414288Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T15:52:23.417905Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a9051c714e34311b","local-member-id":"bcb2eab2b5d0a9fc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.420995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.421049Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.42594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T15:52:23.430912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.227:2379"}
	
	
	==> kernel <==
	 16:01:45 up 14 min,  0 users,  load average: 0.46, 0.28, 0.19
	Linux no-preload-382231 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 15:57:26.590667       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 15:57:26.590693       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 15:57:26.591752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 15:57:26.591870       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:58:26.592203       1 handler_proxy.go:99] no RequestInfo found in the context
	W0719 15:58:26.592327       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 15:58:26.592388       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0719 15:58:26.592442       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0719 15:58:26.593558       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 15:58:26.593676       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:00:26.594655       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 16:00:26.594840       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0719 16:00:26.594907       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 16:00:26.594981       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0719 16:00:26.596145       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 16:00:26.596239       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd] <==
	W0719 15:52:18.584607       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.592444       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.610079       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.636080       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.738917       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.743531       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.769728       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.837400       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.854019       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.886602       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.890359       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.912666       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.928266       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.947783       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.953953       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.971158       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.044287       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.086349       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.147043       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.224235       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.304513       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.456622       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.470563       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.571373       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.628231       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6] <==
	E0719 15:56:33.503637       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 15:56:33.552509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:57:03.510475       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 15:57:03.561330       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:57:33.517162       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 15:57:33.570543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 15:57:44.219740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-382231"
	E0719 15:58:03.523945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 15:58:03.584074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:58:33.530589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 15:58:33.592417       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 15:58:36.105781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="267.542µs"
	I0719 15:58:48.105320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="128.325µs"
	E0719 15:59:03.539243       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 15:59:03.603217       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:59:33.546940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 15:59:33.611366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:00:03.554251       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:00:03.629478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:00:33.561641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:00:33.638526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:01:03.569085       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:01:03.647897       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:01:33.576370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:01:33.656065       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 15:52:35.719251       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 15:52:35.730719       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0719 15:52:35.730841       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 15:52:35.772456       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 15:52:35.772527       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:52:35.772562       1 server_linux.go:170] "Using iptables Proxier"
	I0719 15:52:35.775861       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 15:52:35.776204       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 15:52:35.776232       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:52:35.777848       1 config.go:197] "Starting service config controller"
	I0719 15:52:35.777884       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:52:35.777932       1 config.go:104] "Starting endpoint slice config controller"
	I0719 15:52:35.777963       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:52:35.778887       1 config.go:326] "Starting node config controller"
	I0719 15:52:35.778923       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:52:35.879060       1 shared_informer.go:320] Caches are synced for node config
	I0719 15:52:35.879114       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:52:35.879208       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec] <==
	E0719 15:52:25.675760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.675046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:25.676246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0719 15:52:25.675542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 15:52:25.678069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 15:52:25.678163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:25.678243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 15:52:25.678310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.608027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 15:52:26.609114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.653080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:26.653204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.658985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 15:52:26.659597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.799971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 15:52:26.800205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.862930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:26.863051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.943090       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 15:52:26.943214       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0719 15:52:28.766884       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 15:59:28 no-preload-382231 kubelet[3278]: E0719 15:59:28.161598    3278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 15:59:28 no-preload-382231 kubelet[3278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 15:59:28 no-preload-382231 kubelet[3278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 15:59:28 no-preload-382231 kubelet[3278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 15:59:28 no-preload-382231 kubelet[3278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 15:59:36 no-preload-382231 kubelet[3278]: E0719 15:59:36.088229    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 15:59:48 no-preload-382231 kubelet[3278]: E0719 15:59:48.089302    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:00:03 no-preload-382231 kubelet[3278]: E0719 16:00:03.088936    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:00:15 no-preload-382231 kubelet[3278]: E0719 16:00:15.088424    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:00:27 no-preload-382231 kubelet[3278]: E0719 16:00:27.087491    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:00:28 no-preload-382231 kubelet[3278]: E0719 16:00:28.168914    3278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:00:28 no-preload-382231 kubelet[3278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:00:28 no-preload-382231 kubelet[3278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:00:28 no-preload-382231 kubelet[3278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:00:28 no-preload-382231 kubelet[3278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:00:42 no-preload-382231 kubelet[3278]: E0719 16:00:42.089527    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:00:55 no-preload-382231 kubelet[3278]: E0719 16:00:55.088524    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:01:10 no-preload-382231 kubelet[3278]: E0719 16:01:10.092479    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:01:21 no-preload-382231 kubelet[3278]: E0719 16:01:21.088700    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:01:28 no-preload-382231 kubelet[3278]: E0719 16:01:28.165497    3278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:01:28 no-preload-382231 kubelet[3278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:01:28 no-preload-382231 kubelet[3278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:01:28 no-preload-382231 kubelet[3278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:01:28 no-preload-382231 kubelet[3278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:01:35 no-preload-382231 kubelet[3278]: E0719 16:01:35.089113    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	
	
	==> storage-provisioner [936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3] <==
	I0719 15:52:35.304179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 15:52:35.390334       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 15:52:35.400247       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 15:52:35.423889       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 15:52:35.424314       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-382231_b3e9a515-3fb8-4ff8-876f-51547a216032!
	I0719 15:52:35.427968       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b4ed317-2d3b-4008-a7e3-0badc1e15741", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-382231_b3e9a515-3fb8-4ff8-876f-51547a216032 became leader
	I0719 15:52:35.528143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-382231_b3e9a515-3fb8-4ff8-876f-51547a216032!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-382231 -n no-preload-382231
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-382231 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-rc6ft
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-382231 describe pod metrics-server-78fcd8795b-rc6ft
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-382231 describe pod metrics-server-78fcd8795b-rc6ft: exit status 1 (97.296412ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-rc6ft" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-382231 describe pod metrics-server-78fcd8795b-rc6ft: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.50s)
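A note on the post-mortem above: the describe step most likely returned NotFound only because no namespace was passed on the command line; the pod itself is listed under kube-system in the node output earlier in the log. For manual triage of a failure like this one, the same checks can be rerun by hand. The commands below are an illustrative sketch using the profile name from this run, with the namespace made explicit:

	kubectl --context no-preload-382231 get pods -A --field-selector=status.phase!=Running
	kubectl --context no-preload-382231 -n kube-system describe pod metrics-server-78fcd8795b-rc6ft
	out/minikube-linux-amd64 -p no-preload-382231 logs -n 25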

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (545.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-19 16:01:43.21384405 +0000 UTC m=+6078.009430580
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
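The 9m0s poll that start_stop_delete_test.go performs can be approximated by hand against the same profile; the sketch below is illustrative only, reusing the namespace, label selector, and timeout quoted above (kubectl wait errors out immediately if no pod matches the selector, whereas the test harness keeps polling until the deadline):

	kubectl --context default-k8s-diff-port-601445 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s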
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-601445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-601445 logs -n 25: (2.477467366s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-127438 -- sudo                         | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-127438                                 | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-574044                           | kubernetes-upgrade-574044    | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-817144            | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:44:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
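	(Editorial aside, not part of the captured output: the header above states the klog-style line format used throughout this log. A minimal sketch of splitting such a line into its fields, using the first log line below as sample input; the field names are assumptions for illustration only.)

	// parse_klog.go — illustrative only; parses one klog-style line of the form
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	package main

	import (
		"fmt"
		"regexp"
	)

	// severity, mmdd date, time, thread id, source file:line, message
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s threadid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}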
	I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:44:39.385249   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385257   59208 out.go:304] Setting ErrFile to fd 2...
	I0719 15:44:39.385261   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385405   59208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:44:39.385919   59208 out.go:298] Setting JSON to false
	I0719 15:44:39.386767   59208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5225,"bootTime":1721398654,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:44:39.386817   59208 start.go:139] virtualization: kvm guest
	I0719 15:44:39.390104   59208 out.go:177] * [default-k8s-diff-port-601445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:44:39.391867   59208 notify.go:220] Checking for updates...
	I0719 15:44:39.391890   59208 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:44:39.393463   59208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:44:39.394883   59208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:44:39.396081   59208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:44:39.397280   59208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:44:39.398540   59208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:44:39.400177   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:44:39.400543   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.400601   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.415749   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0719 15:44:39.416104   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.416644   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.416664   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.416981   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.417206   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.417443   59208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:44:39.417751   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.417793   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.432550   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0719 15:44:39.433003   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.433478   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.433504   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.433836   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.434083   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.467474   59208 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:44:38.674498   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:39.468897   59208 start.go:297] selected driver: kvm2
	I0719 15:44:39.468921   59208 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.469073   59208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:44:39.470083   59208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.470178   59208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:44:39.485232   59208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:44:39.485586   59208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:44:39.485616   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:44:39.485624   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:44:39.485666   59208 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.485752   59208 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.487537   59208 out.go:177] * Starting "default-k8s-diff-port-601445" primary control-plane node in "default-k8s-diff-port-601445" cluster
	I0719 15:44:39.488672   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:44:39.488709   59208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:44:39.488718   59208 cache.go:56] Caching tarball of preloaded images
	I0719 15:44:39.488795   59208 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:44:39.488807   59208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:44:39.488895   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:44:39.489065   59208 start.go:360] acquireMachinesLock for default-k8s-diff-port-601445: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:44:41.746585   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:47.826521   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:50.898507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:56.978531   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:00.050437   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:06.130631   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:09.202570   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:15.282481   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:18.354537   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:24.434488   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:27.506515   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:33.586522   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:36.658503   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:42.738573   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:45.810538   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:51.890547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:54.962507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:01.042509   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:04.114621   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:10.194576   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:13.266450   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:19.346524   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:22.418506   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:28.498553   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:31.570507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:37.650477   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:40.722569   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:46.802495   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:49.874579   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:55.954547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:59.026454   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:47:02.030619   58417 start.go:364] duration metric: took 4m36.939495617s to acquireMachinesLock for "no-preload-382231"
	I0719 15:47:02.030679   58417 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:02.030685   58417 fix.go:54] fixHost starting: 
	I0719 15:47:02.031010   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:02.031039   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:02.046256   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0719 15:47:02.046682   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:02.047151   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:47:02.047178   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:02.047573   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:02.047818   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:02.048023   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:47:02.049619   58417 fix.go:112] recreateIfNeeded on no-preload-382231: state=Stopped err=<nil>
	I0719 15:47:02.049641   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	W0719 15:47:02.049785   58417 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:02.051800   58417 out.go:177] * Restarting existing kvm2 VM for "no-preload-382231" ...
	I0719 15:47:02.028090   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:02.028137   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028489   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:47:02.028517   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:47:02.030488   58376 machine.go:97] duration metric: took 4m37.428160404s to provisionDockerMachine
	I0719 15:47:02.030529   58376 fix.go:56] duration metric: took 4m37.450063037s for fixHost
	I0719 15:47:02.030535   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 4m37.450081944s
	W0719 15:47:02.030559   58376 start.go:714] error starting host: provision: host is not running
	W0719 15:47:02.030673   58376 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 15:47:02.030686   58376 start.go:729] Will try again in 5 seconds ...
	I0719 15:47:02.053160   58417 main.go:141] libmachine: (no-preload-382231) Calling .Start
	I0719 15:47:02.053325   58417 main.go:141] libmachine: (no-preload-382231) Ensuring networks are active...
	I0719 15:47:02.054289   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network default is active
	I0719 15:47:02.054786   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network mk-no-preload-382231 is active
	I0719 15:47:02.055259   58417 main.go:141] libmachine: (no-preload-382231) Getting domain xml...
	I0719 15:47:02.056202   58417 main.go:141] libmachine: (no-preload-382231) Creating domain...
	I0719 15:47:03.270495   58417 main.go:141] libmachine: (no-preload-382231) Waiting to get IP...
	I0719 15:47:03.271595   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.272074   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.272151   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.272057   59713 retry.go:31] will retry after 239.502065ms: waiting for machine to come up
	I0719 15:47:03.513745   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.514224   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.514264   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.514191   59713 retry.go:31] will retry after 315.982717ms: waiting for machine to come up
	I0719 15:47:03.831739   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.832155   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.832187   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.832111   59713 retry.go:31] will retry after 468.820113ms: waiting for machine to come up
	I0719 15:47:04.302865   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.303273   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.303306   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.303236   59713 retry.go:31] will retry after 526.764683ms: waiting for machine to come up
	I0719 15:47:04.832048   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.832551   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.832583   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.832504   59713 retry.go:31] will retry after 754.533212ms: waiting for machine to come up
	I0719 15:47:07.032310   58376 start.go:360] acquireMachinesLock for embed-certs-817144: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:47:05.588374   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:05.588834   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:05.588862   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:05.588785   59713 retry.go:31] will retry after 757.18401ms: waiting for machine to come up
	I0719 15:47:06.347691   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:06.348135   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:06.348164   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:06.348053   59713 retry.go:31] will retry after 1.097437331s: waiting for machine to come up
	I0719 15:47:07.446836   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:07.447199   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:07.447219   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:07.447158   59713 retry.go:31] will retry after 1.448513766s: waiting for machine to come up
	I0719 15:47:08.897886   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:08.898289   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:08.898317   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:08.898216   59713 retry.go:31] will retry after 1.583843671s: waiting for machine to come up
	I0719 15:47:10.483476   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:10.483934   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:10.483963   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:10.483864   59713 retry.go:31] will retry after 1.86995909s: waiting for machine to come up
	I0719 15:47:12.355401   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:12.355802   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:12.355827   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:12.355762   59713 retry.go:31] will retry after 2.577908462s: waiting for machine to come up
	I0719 15:47:14.934837   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:14.935263   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:14.935285   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:14.935225   59713 retry.go:31] will retry after 3.158958575s: waiting for machine to come up
	I0719 15:47:19.278747   58817 start.go:364] duration metric: took 3m55.914249116s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:47:19.278822   58817 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:19.278831   58817 fix.go:54] fixHost starting: 
	I0719 15:47:19.279163   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:19.279196   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:19.294722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0719 15:47:19.295092   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:19.295537   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:47:19.295561   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:19.295950   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:19.296186   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:19.296333   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:47:19.297864   58817 fix.go:112] recreateIfNeeded on old-k8s-version-862924: state=Stopped err=<nil>
	I0719 15:47:19.297895   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	W0719 15:47:19.298077   58817 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:19.300041   58817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	I0719 15:47:18.095456   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095912   58417 main.go:141] libmachine: (no-preload-382231) Found IP for machine: 192.168.39.227
	I0719 15:47:18.095936   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has current primary IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095942   58417 main.go:141] libmachine: (no-preload-382231) Reserving static IP address...
	I0719 15:47:18.096317   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.096357   58417 main.go:141] libmachine: (no-preload-382231) Reserved static IP address: 192.168.39.227
	I0719 15:47:18.096376   58417 main.go:141] libmachine: (no-preload-382231) DBG | skip adding static IP to network mk-no-preload-382231 - found existing host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"}
	I0719 15:47:18.096392   58417 main.go:141] libmachine: (no-preload-382231) DBG | Getting to WaitForSSH function...
	I0719 15:47:18.096407   58417 main.go:141] libmachine: (no-preload-382231) Waiting for SSH to be available...
	I0719 15:47:18.098619   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.098978   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.099008   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.099122   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH client type: external
	I0719 15:47:18.099151   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa (-rw-------)
	I0719 15:47:18.099183   58417 main.go:141] libmachine: (no-preload-382231) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:18.099196   58417 main.go:141] libmachine: (no-preload-382231) DBG | About to run SSH command:
	I0719 15:47:18.099210   58417 main.go:141] libmachine: (no-preload-382231) DBG | exit 0
	I0719 15:47:18.222285   58417 main.go:141] libmachine: (no-preload-382231) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:18.222607   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetConfigRaw
	I0719 15:47:18.223181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.225751   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226062   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.226105   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226327   58417 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/config.json ...
	I0719 15:47:18.226504   58417 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:18.226520   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:18.226684   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.228592   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.228936   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.228960   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.229094   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.229246   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229398   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229516   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.229663   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.229887   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.229901   58417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:18.330731   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:18.330764   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331053   58417 buildroot.go:166] provisioning hostname "no-preload-382231"
	I0719 15:47:18.331084   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331282   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.333905   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334212   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.334270   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334331   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.334510   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334705   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334850   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.335030   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.335216   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.335230   58417 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-382231 && echo "no-preload-382231" | sudo tee /etc/hostname
	I0719 15:47:18.453128   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-382231
	
	I0719 15:47:18.453151   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.455964   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456323   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.456349   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456549   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.456822   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457010   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457158   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.457300   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.457535   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.457561   58417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-382231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-382231/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-382231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:18.568852   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:18.568878   58417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:18.568902   58417 buildroot.go:174] setting up certificates
	I0719 15:47:18.568915   58417 provision.go:84] configureAuth start
	I0719 15:47:18.568924   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.569240   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.571473   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.571757   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.571783   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.572029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.573941   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574213   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.574247   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574393   58417 provision.go:143] copyHostCerts
	I0719 15:47:18.574455   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:18.574465   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:18.574528   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:18.574615   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:18.574622   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:18.574645   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:18.574696   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:18.574703   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:18.574722   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:18.574768   58417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.no-preload-382231 san=[127.0.0.1 192.168.39.227 localhost minikube no-preload-382231]
	I0719 15:47:18.636408   58417 provision.go:177] copyRemoteCerts
	I0719 15:47:18.636458   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:18.636477   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.638719   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639021   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.639054   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639191   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.639379   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.639532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.639795   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:18.720305   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:18.742906   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:18.764937   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:47:18.787183   58417 provision.go:87] duration metric: took 218.257504ms to configureAuth
	I0719 15:47:18.787205   58417 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:18.787355   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:47:18.787418   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.789685   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.789992   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.790017   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.790181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.790366   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790632   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.790770   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.790929   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.790943   58417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:19.053326   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:19.053350   58417 machine.go:97] duration metric: took 826.83404ms to provisionDockerMachine
	I0719 15:47:19.053364   58417 start.go:293] postStartSetup for "no-preload-382231" (driver="kvm2")
	I0719 15:47:19.053379   58417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:19.053409   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.053733   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:19.053755   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.056355   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056709   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.056737   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056884   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.057037   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.057172   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.057370   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.136785   58417 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:19.140756   58417 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:19.140777   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:19.140847   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:19.140941   58417 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:19.141044   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:19.150247   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:19.172800   58417 start.go:296] duration metric: took 119.424607ms for postStartSetup
	I0719 15:47:19.172832   58417 fix.go:56] duration metric: took 17.142146552s for fixHost
	I0719 15:47:19.172849   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.175427   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.175816   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.175851   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.176027   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.176281   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176636   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.176892   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:19.177051   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:19.177061   58417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:19.278564   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404039.251890495
	
	I0719 15:47:19.278594   58417 fix.go:216] guest clock: 1721404039.251890495
	I0719 15:47:19.278605   58417 fix.go:229] Guest: 2024-07-19 15:47:19.251890495 +0000 UTC Remote: 2024-07-19 15:47:19.172835531 +0000 UTC m=+294.220034318 (delta=79.054964ms)
	I0719 15:47:19.278651   58417 fix.go:200] guest clock delta is within tolerance: 79.054964ms
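	(Editorial aside, not part of the captured output: the fix.go lines above compare the guest VM clock against the host-side timestamp and accept the skew if it is within tolerance. A minimal sketch of that kind of check, using the two timestamps logged above; the 2-second tolerance here is an assumed value for illustration, not minikube's actual constant.)

	// clock_delta.go — illustrative only; recomputes the guest-vs-host clock delta shown in the log.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1721404039, 251890495)                         // "guest clock" value from the log
		remote := time.Date(2024, 7, 19, 15, 47, 19, 172835531, time.UTC) // "Remote" timestamp from the log
		delta := guest.Sub(remote)

		const tolerance = 2 * time.Second // assumed tolerance, for illustration
		ok := delta > -tolerance && delta < tolerance
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints delta=79.054964ms within tolerance: true
	}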
	I0719 15:47:19.278659   58417 start.go:83] releasing machines lock for "no-preload-382231", held for 17.247997118s
	I0719 15:47:19.278692   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.279029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:19.281674   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282034   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.282063   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282221   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282750   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282935   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282991   58417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:19.283061   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.283095   58417 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:19.283116   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.285509   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285805   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.285828   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285846   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285959   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286182   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286276   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.286300   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.286468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286481   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286632   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.286672   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286806   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286935   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.363444   58417 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:19.387514   58417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:19.545902   58417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:19.551747   58417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:19.551812   58417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:19.568563   58417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:19.568589   58417 start.go:495] detecting cgroup driver to use...
	I0719 15:47:19.568654   58417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:19.589440   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:19.604889   58417 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:19.604962   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:19.624114   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:19.638265   58417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:19.752880   58417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:19.900078   58417 docker.go:233] disabling docker service ...
	I0719 15:47:19.900132   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:19.914990   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:19.928976   58417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:20.079363   58417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:20.203629   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:20.218502   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:20.237028   58417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 15:47:20.237089   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.248514   58417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:20.248597   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.260162   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.272166   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.283341   58417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:20.294687   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.305495   58417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.328024   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.339666   58417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:20.349271   58417 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:20.349314   58417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:20.364130   58417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:20.376267   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:20.501259   58417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:20.643763   58417 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:20.643828   58417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:20.648525   58417 start.go:563] Will wait 60s for crictl version
	I0719 15:47:20.648586   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:20.652256   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:20.689386   58417 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:20.689468   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.720662   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.751393   58417 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 15:47:19.301467   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .Start
	I0719 15:47:19.301647   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:47:19.302430   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:47:19.302790   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:47:19.303288   58817 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:47:19.304087   58817 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:47:20.540210   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:47:20.541173   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.541580   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.541657   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.541560   59851 retry.go:31] will retry after 276.525447ms: waiting for machine to come up
	I0719 15:47:20.820097   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.820549   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.820577   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.820512   59851 retry.go:31] will retry after 350.128419ms: waiting for machine to come up
	I0719 15:47:21.172277   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.172787   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.172814   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.172742   59851 retry.go:31] will retry after 437.780791ms: waiting for machine to come up
	I0719 15:47:21.612338   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.612766   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.612796   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.612710   59851 retry.go:31] will retry after 607.044351ms: waiting for machine to come up
	I0719 15:47:22.221152   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.221715   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.221755   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.221589   59851 retry.go:31] will retry after 568.388882ms: waiting for machine to come up
	I0719 15:47:22.791499   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.791966   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.791996   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.791912   59851 retry.go:31] will retry after 786.805254ms: waiting for machine to come up
	I0719 15:47:20.752939   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:20.755996   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756367   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:20.756395   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756723   58417 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:20.760962   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:20.776973   58417 kubeadm.go:883] updating cluster {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:20.777084   58417 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 15:47:20.777120   58417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:20.814520   58417 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 15:47:20.814547   58417 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:20.814631   58417 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:20.814650   58417 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.814657   58417 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.814682   58417 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.814637   58417 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.814736   58417 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.814808   58417 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.814742   58417 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.816435   58417 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.816446   58417 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.816513   58417 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.816535   58417 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816559   58417 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.816719   58417 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:21.003845   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 15:47:21.028954   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.039628   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.041391   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.065499   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.084966   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.142812   58417 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 15:47:21.142873   58417 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 15:47:21.142905   58417 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.142921   58417 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 15:47:21.142939   58417 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.142962   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142877   58417 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.143025   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142983   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.160141   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.182875   58417 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 15:47:21.182918   58417 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.182945   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.182958   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.182957   58417 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 15:47:21.182992   58417 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.183029   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.183044   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.183064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.272688   58417 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 15:47:21.272724   58417 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.272768   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.272783   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272825   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.272876   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272906   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 15:47:21.272931   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.272971   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:21.272997   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 15:47:21.273064   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:21.326354   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326356   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.326441   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 15:47:21.326457   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326459   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326492   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326497   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.326529   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 15:47:21.326535   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 15:47:21.326633   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.363401   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:21.363496   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:22.268448   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.010876   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.684346805s)
	I0719 15:47:24.010910   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 15:47:24.010920   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.684439864s)
	I0719 15:47:24.010952   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 15:47:24.010930   58417 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.010993   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.684342001s)
	I0719 15:47:24.011014   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.011019   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011046   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.647533327s)
	I0719 15:47:24.011066   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011098   58417 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742620594s)
	I0719 15:47:24.011137   58417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 15:47:24.011170   58417 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.011204   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:23.580485   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:23.580950   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:23.580983   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:23.580876   59851 retry.go:31] will retry after 919.322539ms: waiting for machine to come up
	I0719 15:47:24.502381   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:24.502817   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:24.502844   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:24.502776   59851 retry.go:31] will retry after 1.142581835s: waiting for machine to come up
	I0719 15:47:25.647200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:25.647663   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:25.647693   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:25.647559   59851 retry.go:31] will retry after 1.682329055s: waiting for machine to come up
	I0719 15:47:27.332531   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:27.333052   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:27.333080   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:27.333003   59851 retry.go:31] will retry after 1.579786507s: waiting for machine to come up
	I0719 15:47:27.292973   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.281931356s)
	I0719 15:47:27.293008   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 15:47:27.293001   58417 ssh_runner.go:235] Completed: which crictl: (3.281778521s)
	I0719 15:47:27.293043   58417 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:27.293064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:27.293086   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:29.269642   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976526914s)
	I0719 15:47:29.269676   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 15:47:29.269698   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269641   58417 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.97655096s)
	I0719 15:47:29.269748   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269773   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 15:47:29.269875   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:28.914628   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:28.915181   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:28.915221   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:28.915127   59851 retry.go:31] will retry after 2.156491688s: waiting for machine to come up
	I0719 15:47:31.073521   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:31.074101   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:31.074136   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:31.074039   59851 retry.go:31] will retry after 2.252021853s: waiting for machine to come up
	I0719 15:47:31.242199   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.972421845s)
	I0719 15:47:31.242257   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 15:47:31.242273   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.972374564s)
	I0719 15:47:31.242283   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:31.242306   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 15:47:31.242334   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:32.592736   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.350379333s)
	I0719 15:47:32.592762   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 15:47:32.592782   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:32.592817   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:34.547084   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954243196s)
	I0719 15:47:34.547122   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 15:47:34.547155   58417 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:34.547231   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:33.328344   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:33.328815   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:33.328849   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:33.328779   59851 retry.go:31] will retry after 4.118454422s: waiting for machine to come up
	I0719 15:47:37.451169   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.451651   58817 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:47:37.451677   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:47:37.451691   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.452205   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.452240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | skip adding static IP to network mk-old-k8s-version-862924 - found existing host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"}
	I0719 15:47:37.452258   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:47:37.452276   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:47:37.452287   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:47:37.454636   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455004   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.455043   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455210   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:47:37.455242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:47:37.455284   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:37.455302   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:47:37.455316   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:47:37.583375   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:37.583754   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:47:37.584481   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.587242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587644   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.587668   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587961   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:47:37.588195   58817 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:37.588217   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:37.588446   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.590801   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591137   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.591166   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.591471   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591592   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591736   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.591896   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.592100   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.592111   58817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:37.698760   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:37.698787   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699086   58817 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:47:37.699113   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699326   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.701828   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702208   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.702253   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702339   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.702508   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702674   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702817   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.702983   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.703136   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.703147   58817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:47:37.823930   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:47:37.823960   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.826546   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.826875   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.826912   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.827043   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.827336   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827506   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827690   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.827858   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.828039   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.828056   58817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:37.935860   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:37.935888   58817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:37.935917   58817 buildroot.go:174] setting up certificates
	I0719 15:47:37.935927   58817 provision.go:84] configureAuth start
	I0719 15:47:37.935939   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.936223   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.938638   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.938990   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.939017   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.939170   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.941161   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941458   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.941487   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941597   58817 provision.go:143] copyHostCerts
	I0719 15:47:37.941669   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:37.941682   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:37.941731   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:37.941824   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:37.941832   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:37.941850   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:37.941910   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:37.941919   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:37.941942   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:37.942003   58817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
	I0719 15:47:38.046717   58817 provision.go:177] copyRemoteCerts
	I0719 15:47:38.046770   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:38.046799   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.049240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049578   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.049611   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049806   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.050026   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.050200   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.050377   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.133032   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:38.157804   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:47:38.184189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:38.207761   58817 provision.go:87] duration metric: took 271.801669ms to configureAuth
	I0719 15:47:38.207801   58817 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:38.208023   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:47:38.208148   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.211030   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211467   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.211497   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211675   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.211851   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212046   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212195   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.212374   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.212556   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.212578   58817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:38.759098   59208 start.go:364] duration metric: took 2m59.27000152s to acquireMachinesLock for "default-k8s-diff-port-601445"
	I0719 15:47:38.759165   59208 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:38.759176   59208 fix.go:54] fixHost starting: 
	I0719 15:47:38.759633   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:38.759685   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:38.779587   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0719 15:47:38.779979   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:38.780480   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:47:38.780497   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:38.780888   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:38.781129   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:38.781260   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:47:38.782786   59208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601445: state=Stopped err=<nil>
	I0719 15:47:38.782860   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	W0719 15:47:38.783056   59208 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:38.785037   59208 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-601445" ...
	I0719 15:47:38.786497   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Start
	I0719 15:47:38.786691   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring networks are active...
	I0719 15:47:38.787520   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network default is active
	I0719 15:47:38.787819   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network mk-default-k8s-diff-port-601445 is active
	I0719 15:47:38.788418   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Getting domain xml...
	I0719 15:47:38.789173   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Creating domain...
	I0719 15:47:35.191148   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 15:47:35.191193   58417 cache_images.go:123] Successfully loaded all cached images
	I0719 15:47:35.191198   58417 cache_images.go:92] duration metric: took 14.376640053s to LoadCachedImages
	I0719 15:47:35.191209   58417 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0-beta.0 crio true true} ...
	I0719 15:47:35.191329   58417 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-382231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:35.191424   58417 ssh_runner.go:195] Run: crio config
	I0719 15:47:35.236248   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:35.236276   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:35.236288   58417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:35.236309   58417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-382231 NodeName:no-preload-382231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:47:35.236464   58417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-382231"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:35.236525   58417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 15:47:35.247524   58417 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:35.247611   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:35.257583   58417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 15:47:35.275057   58417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 15:47:35.291468   58417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 15:47:35.308021   58417 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:35.312121   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
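The one-liner above is how minikube pins control-plane.minikube.internal in the guest's /etc/hosts; expanded for readability (same commands, purely illustrative):

    # drop any stale tab-separated entry, append the fresh mapping, then copy back with sudo
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.227\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts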
	I0719 15:47:35.324449   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:35.451149   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:35.477844   58417 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231 for IP: 192.168.39.227
	I0719 15:47:35.477868   58417 certs.go:194] generating shared ca certs ...
	I0719 15:47:35.477887   58417 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:35.478043   58417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:35.478093   58417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:35.478103   58417 certs.go:256] generating profile certs ...
	I0719 15:47:35.478174   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.key
	I0719 15:47:35.478301   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key.46f9a235
	I0719 15:47:35.478339   58417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key
	I0719 15:47:35.478482   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:35.478520   58417 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:35.478530   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:35.478549   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:35.478569   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:35.478591   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:35.478628   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:35.479291   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:35.523106   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:35.546934   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:35.585616   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:35.617030   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:47:35.641486   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:47:35.680051   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:35.703679   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:47:35.728088   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:35.751219   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:35.774149   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:35.796985   58417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:35.813795   58417 ssh_runner.go:195] Run: openssl version
	I0719 15:47:35.819568   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:35.830350   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834792   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834847   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.840531   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:35.851584   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:35.862655   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867139   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867199   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.872916   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:35.883986   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:35.894795   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899001   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899049   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.904496   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
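The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL trust-store convention: the file name is the certificate's subject hash plus a .0 suffix. A minimal sketch of that convention, using the minikube CA as the example certificate:

    # compute the subject hash and create the lookup link OpenSSL expects under /etc/ssl/certs
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 here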
	I0719 15:47:35.915180   58417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:35.919395   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:35.926075   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:35.931870   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:35.938089   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:35.944079   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:35.950449   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
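The -checkend 86400 probes above simply ask whether each certificate is still valid 24 hours from now; openssl exits non-zero if it would expire inside that window. For example:

    # exit 0: valid for at least another day; exit 1: expires (or already expired) within 24h
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate still valid tomorrow"
    else
        echo "certificate expires within 24h"
    fi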
	I0719 15:47:35.956291   58417 kubeadm.go:392] StartCluster: {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:35.956396   58417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:35.956452   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:35.993976   58417 cri.go:89] found id: ""
	I0719 15:47:35.994047   58417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:36.004507   58417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:36.004532   58417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:36.004579   58417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:36.014644   58417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:36.015628   58417 kubeconfig.go:125] found "no-preload-382231" server: "https://192.168.39.227:8443"
	I0719 15:47:36.017618   58417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:36.027252   58417 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0719 15:47:36.027281   58417 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:36.027292   58417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:36.027350   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:36.066863   58417 cri.go:89] found id: ""
	I0719 15:47:36.066934   58417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:36.082971   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:36.092782   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:36.092802   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:36.092841   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:36.101945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:36.101998   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:36.111368   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:36.120402   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:36.120447   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:36.130124   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.138945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:36.138990   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.148176   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:36.157008   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:36.157060   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:36.166273   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:36.176032   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:36.291855   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.285472   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.476541   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.547807   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
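The five commands above re-run the individual kubeadm init phases against the generated config instead of doing a full init. Condensed into a loop (same phases and config file, assuming the versioned binaries directory stays on PATH as in the log):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # word-splitting on $phase turns "certs all" into the two arguments kubeadm expects
        sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done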
	I0719 15:47:37.652551   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:37.652649   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.153088   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.653690   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.718826   58417 api_server.go:72] duration metric: took 1.066275053s to wait for apiserver process to appear ...
	I0719 15:47:38.718858   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:47:38.718891   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:38.503709   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:38.503737   58817 machine.go:97] duration metric: took 915.527957ms to provisionDockerMachine
	I0719 15:47:38.503750   58817 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:47:38.503762   58817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:38.503783   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.504151   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:38.504180   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.507475   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.507843   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.507877   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.508083   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.508314   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.508465   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.508583   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.593985   58817 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:38.598265   58817 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:38.598287   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:38.598352   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:38.598446   58817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:38.598533   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:38.609186   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:38.644767   58817 start.go:296] duration metric: took 141.002746ms for postStartSetup
	I0719 15:47:38.644808   58817 fix.go:56] duration metric: took 19.365976542s for fixHost
	I0719 15:47:38.644836   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.648171   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648545   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.648576   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648777   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.649009   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649185   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649360   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.649513   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.649779   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.649795   58817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:38.758955   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404058.716653194
	
	I0719 15:47:38.758978   58817 fix.go:216] guest clock: 1721404058.716653194
	I0719 15:47:38.758987   58817 fix.go:229] Guest: 2024-07-19 15:47:38.716653194 +0000 UTC Remote: 2024-07-19 15:47:38.644812576 +0000 UTC m=+255.418683135 (delta=71.840618ms)
	I0719 15:47:38.759010   58817 fix.go:200] guest clock delta is within tolerance: 71.840618ms
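The guest-clock check above compares date +%s.%N on the VM against the host's wall clock at the moment the SSH command returns; here the skew was about 72ms, inside tolerance. A rough stand-alone equivalent (a sketch only; minikube uses its own SSH client rather than the system ssh):

    key=/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa
    guest=$(ssh -i "$key" docker@192.168.50.102 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest-host delta: %+.3fs\n", g - h }'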
	I0719 15:47:38.759017   58817 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 19.4802155s
	I0719 15:47:38.759056   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.759308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:38.761901   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762334   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.762368   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762525   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763030   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763198   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763296   58817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:38.763343   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.763489   58817 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:38.763522   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.766613   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.766771   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767028   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767050   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767219   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767298   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767377   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767453   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767577   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767637   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767723   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767768   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.767845   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.874680   58817 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:38.882155   58817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:39.030824   58817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:39.038357   58817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:39.038458   58817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:39.059981   58817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:39.060015   58817 start.go:495] detecting cgroup driver to use...
	I0719 15:47:39.060081   58817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:39.082631   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:39.101570   58817 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:39.101628   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:39.120103   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:39.139636   58817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:39.259574   58817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:39.441096   58817 docker.go:233] disabling docker service ...
	I0719 15:47:39.441162   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:39.460197   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:39.476884   58817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:39.639473   58817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:39.773468   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:39.790968   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:39.811330   58817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:47:39.811407   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.823965   58817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:39.824057   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.835454   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.846201   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
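After those sed edits, the drop-in at /etc/crio/crio.conf.d/02-crio.conf should carry the three values the test cares about. A quick way to confirm, with the expected lines shown as comments (assuming the stock key names in that file):

    grep -E '(pause_image|cgroup_manager|conmon_cgroup) =' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"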
	I0719 15:47:39.856951   58817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:39.869495   58817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:39.880850   58817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:39.880914   58817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:39.900465   58817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:39.911488   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:40.032501   58817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:40.194606   58817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:40.194676   58817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:40.199572   58817 start.go:563] Will wait 60s for crictl version
	I0719 15:47:40.199683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:40.203747   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:40.246479   58817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
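With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines earlier), the same version information can be pulled by hand; the endpoint flag below just makes the socket explicit:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # RuntimeName:       cri-o
    # RuntimeVersion:    1.29.1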
	I0719 15:47:40.246594   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.275992   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.313199   58817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:47:40.314363   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:40.317688   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318081   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:40.318106   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318333   58817 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:40.323006   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:40.336488   58817 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:40.336626   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:47:40.336672   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:40.394863   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:40.394934   58817 ssh_runner.go:195] Run: which lz4
	I0719 15:47:40.399546   58817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:47:40.404163   58817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:47:40.404197   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:47:42.191817   58817 crio.go:462] duration metric: took 1.792317426s to copy over tarball
	I0719 15:47:42.191882   58817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:41.984204   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:41.984237   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:41.984255   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.031024   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:42.031055   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:42.219815   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.256851   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.256888   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:42.719015   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.756668   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.756705   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.219173   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.255610   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:43.255645   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.719116   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.725453   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:47:43.739070   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:47:43.739108   58417 api_server.go:131] duration metric: took 5.020238689s to wait for apiserver health ...
	I0719 15:47:43.739119   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:43.739128   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:43.741458   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:47:40.069048   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting to get IP...
	I0719 15:47:40.069866   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070409   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070480   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.070379   59996 retry.go:31] will retry after 299.168281ms: waiting for machine to come up
	I0719 15:47:40.370939   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371381   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.371340   59996 retry.go:31] will retry after 388.345842ms: waiting for machine to come up
	I0719 15:47:40.761301   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762861   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762889   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.762797   59996 retry.go:31] will retry after 305.39596ms: waiting for machine to come up
	I0719 15:47:41.070215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070791   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070823   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.070746   59996 retry.go:31] will retry after 452.50233ms: waiting for machine to come up
	I0719 15:47:41.525465   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.525997   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.526019   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.525920   59996 retry.go:31] will retry after 686.050268ms: waiting for machine to come up
	I0719 15:47:42.214012   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214513   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214545   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:42.214465   59996 retry.go:31] will retry after 867.815689ms: waiting for machine to come up
	I0719 15:47:43.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084240   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:43.084198   59996 retry.go:31] will retry after 1.006018507s: waiting for machine to come up
	I0719 15:47:44.092571   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093050   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:44.092992   59996 retry.go:31] will retry after 961.604699ms: waiting for machine to come up
	I0719 15:47:43.743125   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:47:43.780558   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:47:43.825123   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:47:43.849564   58417 system_pods.go:59] 8 kube-system pods found
	I0719 15:47:43.849608   58417 system_pods.go:61] "coredns-5cfdc65f69-9p4dr" [b6744bc9-b683-4f7e-b506-a95eb58ac308] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:47:43.849620   58417 system_pods.go:61] "etcd-no-preload-382231" [1f2704ae-84a0-4636-9826-f6bb5d2cb8b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:47:43.849632   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [e4ae90fb-9024-4420-9249-6f936ff43894] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:47:43.849643   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [ceb3538d-a6b9-4135-b044-b139003baf35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:47:43.849650   58417 system_pods.go:61] "kube-proxy-z2z9r" [fdc0eb8f-2884-436b-ba1e-4c71107f756c] Running
	I0719 15:47:43.849657   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [5ae3221b-7186-4dbe-9b1b-fb4c8c239c62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:47:43.849677   58417 system_pods.go:61] "metrics-server-78fcd8795b-zwr8g" [4d4de9aa-89f2-4cf4-85c2-26df25bd82c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:47:43.849687   58417 system_pods.go:61] "storage-provisioner" [ab5ce17f-a0da-4ab7-803e-245ba4363d09] Running
	I0719 15:47:43.849696   58417 system_pods.go:74] duration metric: took 24.54438ms to wait for pod list to return data ...
	I0719 15:47:43.849709   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:47:43.864512   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:47:43.864636   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:47:43.864684   58417 node_conditions.go:105] duration metric: took 14.967708ms to run NodePressure ...
	I0719 15:47:43.864727   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:44.524399   58417 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531924   58417 kubeadm.go:739] kubelet initialised
	I0719 15:47:44.531944   58417 kubeadm.go:740] duration metric: took 7.516197ms waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531952   58417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:47:44.538016   58417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:45.377244   58817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18533335s)
	I0719 15:47:45.377275   58817 crio.go:469] duration metric: took 3.185430213s to extract the tarball
	I0719 15:47:45.377282   58817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:47:45.422160   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:45.463351   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:45.463377   58817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:45.463437   58817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.463445   58817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.463484   58817 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.463496   58817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.463452   58817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.463470   58817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465250   58817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465259   58817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.465270   58817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.465280   58817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.465252   58817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.465254   58817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.465322   58817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.465358   58817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.652138   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:47:45.694548   58817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:47:45.694600   58817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:47:45.694655   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.698969   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:47:45.721986   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.747138   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:47:45.779449   58817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:47:45.779485   58817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.779526   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.783597   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.822950   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.825025   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:47:45.830471   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.835797   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.837995   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.840998   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.907741   58817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:47:45.907793   58817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.907845   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.928805   58817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:47:45.928844   58817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.928918   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.948467   58817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:47:45.948522   58817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.948571   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.966584   58817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:47:45.966629   58817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.966683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975276   58817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:47:45.975316   58817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.975339   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.975355   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975378   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.975424   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.975449   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:46.069073   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:46.069100   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:47:46.079020   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:47:46.080816   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:47:46.080818   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:47:46.111983   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:47:46.308204   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:46.465651   58817 cache_images.go:92] duration metric: took 1.002255395s to LoadCachedImages
	W0719 15:47:46.465740   58817 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0719 15:47:46.465753   58817 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:47:46.465899   58817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:46.465973   58817 ssh_runner.go:195] Run: crio config
	I0719 15:47:46.524125   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:47:46.524152   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:46.524167   58817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:46.524190   58817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:47:46.524322   58817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:46.524476   58817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:47:46.534654   58817 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:46.534726   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:46.544888   58817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:47:46.565864   58817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:47:46.584204   58817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:47:46.603470   58817 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:46.607776   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:46.624713   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:46.752753   58817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:46.776115   58817 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:47:46.776151   58817 certs.go:194] generating shared ca certs ...
	I0719 15:47:46.776182   58817 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:46.776376   58817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:46.776431   58817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:46.776443   58817 certs.go:256] generating profile certs ...
	I0719 15:47:46.776559   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:47:46.776622   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:47:46.776673   58817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:47:46.776811   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:46.776860   58817 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:46.776880   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:46.776922   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:46.776961   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:46.776991   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:46.777051   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:46.777929   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:46.815207   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:46.863189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:46.894161   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:46.932391   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:47:46.981696   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:47:47.016950   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:47.043597   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:47:47.067408   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:47.092082   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:47.116639   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:47.142425   58817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:47.161443   58817 ssh_runner.go:195] Run: openssl version
	I0719 15:47:47.167678   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:47.180194   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185276   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185330   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.191437   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:47.203471   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:47.215645   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220392   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220444   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.226332   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:47.238559   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:47.251382   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256213   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256268   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.262261   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:47.275192   58817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:47.280176   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:47.288308   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:47.295013   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:47.301552   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:47.307628   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:47.313505   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:47:47.319956   58817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:47.320042   58817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:47.320097   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.359706   58817 cri.go:89] found id: ""
	I0719 15:47:47.359789   58817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:47.373816   58817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:47.373839   58817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:47.373907   58817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:47.386334   58817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:47.387432   58817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:47:47.388146   58817 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-3847/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862924" cluster setting kubeconfig missing "old-k8s-version-862924" context setting]
	I0719 15:47:47.389641   58817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:47.393000   58817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:47.404737   58817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.102
	I0719 15:47:47.404770   58817 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:47.404782   58817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:47.404847   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.448460   58817 cri.go:89] found id: ""
	I0719 15:47:47.448529   58817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:47.466897   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:47.479093   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:47.479136   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:47.479201   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:47.490338   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:47.490425   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:47.502079   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:47.514653   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:47.514722   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:47.526533   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.536043   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:47.536109   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.545691   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:47.555221   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:47.555295   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:47.564645   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:47.574094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:47.740041   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:45.055856   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056318   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056347   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:45.056263   59996 retry.go:31] will retry after 1.300059023s: waiting for machine to come up
	I0719 15:47:46.357875   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358379   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358407   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:46.358331   59996 retry.go:31] will retry after 2.269558328s: waiting for machine to come up
	I0719 15:47:48.630965   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631641   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631674   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:48.631546   59996 retry.go:31] will retry after 2.829487546s: waiting for machine to come up
	I0719 15:47:47.449778   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:48.045481   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:48.045508   58417 pod_ready.go:81] duration metric: took 3.507466621s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.045521   58417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.272472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.545776   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.692516   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.799640   58817 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:48.799721   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.299983   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.800470   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.300833   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.800741   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.300351   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.800185   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.463569   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464003   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:51.463968   59996 retry.go:31] will retry after 2.917804786s: waiting for machine to come up
	I0719 15:47:54.383261   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383967   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383993   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:54.383924   59996 retry.go:31] will retry after 4.044917947s: waiting for machine to come up
	I0719 15:47:50.052168   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:51.052114   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:51.052135   58417 pod_ready.go:81] duration metric: took 3.006607122s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:51.052144   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059540   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:52.059563   58417 pod_ready.go:81] duration metric: took 1.007411773s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059576   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.066338   58417 pod_ready.go:102] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:54.567056   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.567076   58417 pod_ready.go:81] duration metric: took 2.507493559s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.567085   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571655   58417 pod_ready.go:92] pod "kube-proxy-z2z9r" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.571672   58417 pod_ready.go:81] duration metric: took 4.581191ms for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571680   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.575983   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.576005   58417 pod_ready.go:81] duration metric: took 4.315788ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.576017   58417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:53.300353   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.800804   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.800691   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.800502   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.300314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.300773   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.432420   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432945   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Found IP for machine: 192.168.61.144
	I0719 15:47:58.432976   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has current primary IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432988   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserving static IP address...
	I0719 15:47:58.433361   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.433395   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | skip adding static IP to network mk-default-k8s-diff-port-601445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"}
	I0719 15:47:58.433412   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserved static IP address: 192.168.61.144
	I0719 15:47:58.433430   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for SSH to be available...
	I0719 15:47:58.433442   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Getting to WaitForSSH function...
	I0719 15:47:58.435448   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435770   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.435807   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435868   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH client type: external
	I0719 15:47:58.435930   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa (-rw-------)
	I0719 15:47:58.435973   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:58.435992   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | About to run SSH command:
	I0719 15:47:58.436002   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | exit 0
	I0719 15:47:58.562187   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:58.562564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetConfigRaw
	I0719 15:47:58.563233   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.565694   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566042   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.566066   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566301   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:47:58.566469   59208 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:58.566489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:58.566684   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.569109   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569485   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.569512   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569594   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.569763   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.569912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.570022   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.570167   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.570398   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.570412   59208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:58.675164   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:58.675217   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675455   59208 buildroot.go:166] provisioning hostname "default-k8s-diff-port-601445"
	I0719 15:47:58.675487   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.678103   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678522   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.678564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678721   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.678908   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679074   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679198   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.679345   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.679516   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.679531   59208 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-601445 && echo "default-k8s-diff-port-601445" | sudo tee /etc/hostname
	I0719 15:47:58.802305   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-601445
	
	I0719 15:47:58.802336   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.805215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805582   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.805613   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805796   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.805981   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806139   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806322   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.806517   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.806689   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.806706   59208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-601445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-601445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-601445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:58.919959   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
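[Editor's note] The SSH exchange above idempotently pins the node name in the guest's /etc/hosts: if no line already ends in the hostname, it either rewrites the existing 127.0.1.1 entry or appends one. The Go snippet below is only an illustrative sketch of rendering that same shell snippet for an arbitrary hostname before sending it over SSH; the function name hostsEntryScript is hypothetical and not minikube's actual helper.

package main

import "fmt"

// hostsEntryScript renders the idempotent /etc/hosts update seen in the log:
// rewrite the 127.0.1.1 line if present, otherwise append one.
func hostsEntryScript(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsEntryScript("default-k8s-diff-port-601445"))
}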
	I0719 15:47:58.919985   59208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:58.920019   59208 buildroot.go:174] setting up certificates
	I0719 15:47:58.920031   59208 provision.go:84] configureAuth start
	I0719 15:47:58.920041   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.920283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.922837   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923193   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.923225   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923413   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.925832   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926128   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.926156   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926297   59208 provision.go:143] copyHostCerts
	I0719 15:47:58.926360   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:58.926374   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:58.926425   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:58.926512   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:58.926520   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:58.926543   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:58.926600   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:58.926609   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:58.926630   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:58.926682   59208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-601445 san=[127.0.0.1 192.168.61.144 default-k8s-diff-port-601445 localhost minikube]
	I0719 15:47:59.080911   59208 provision.go:177] copyRemoteCerts
	I0719 15:47:59.080966   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:59.080990   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084029   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.084059   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084219   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.084411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.084531   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.084674   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.172754   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:59.198872   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 15:47:59.222898   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 15:47:59.246017   59208 provision.go:87] duration metric: took 325.975105ms to configureAuth
	I0719 15:47:59.246037   59208 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:59.246215   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:47:59.246312   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.248757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249079   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.249111   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249354   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.249526   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249679   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249779   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.249924   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.250142   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.250161   59208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:59.743101   58376 start.go:364] duration metric: took 52.710718223s to acquireMachinesLock for "embed-certs-817144"
	I0719 15:47:59.743169   58376 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:59.743177   58376 fix.go:54] fixHost starting: 
	I0719 15:47:59.743553   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:59.743591   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:59.760837   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0719 15:47:59.761216   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:59.761734   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:47:59.761754   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:59.762080   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:59.762291   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:47:59.762504   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:47:59.764044   58376 fix.go:112] recreateIfNeeded on embed-certs-817144: state=Stopped err=<nil>
	I0719 15:47:59.764067   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	W0719 15:47:59.764217   58376 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:59.766063   58376 out.go:177] * Restarting existing kvm2 VM for "embed-certs-817144" ...
	I0719 15:47:56.582753   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:58.583049   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:59.508289   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:59.508327   59208 machine.go:97] duration metric: took 941.842272ms to provisionDockerMachine
	I0719 15:47:59.508343   59208 start.go:293] postStartSetup for "default-k8s-diff-port-601445" (driver="kvm2")
	I0719 15:47:59.508359   59208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:59.508383   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.508687   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:59.508720   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.511449   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.511887   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.511911   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.512095   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.512275   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.512437   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.512580   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.596683   59208 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:59.600761   59208 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:59.600782   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:59.600841   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:59.600911   59208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:59.600996   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:59.609867   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:59.633767   59208 start.go:296] duration metric: took 125.408568ms for postStartSetup
	I0719 15:47:59.633803   59208 fix.go:56] duration metric: took 20.874627736s for fixHost
	I0719 15:47:59.633825   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.636600   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.636944   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.636977   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.637121   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.637328   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637495   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637640   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.637811   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.637989   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.637999   59208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:59.742929   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404079.728807147
	
	I0719 15:47:59.742957   59208 fix.go:216] guest clock: 1721404079.728807147
	I0719 15:47:59.742967   59208 fix.go:229] Guest: 2024-07-19 15:47:59.728807147 +0000 UTC Remote: 2024-07-19 15:47:59.633807395 +0000 UTC m=+200.280673126 (delta=94.999752ms)
	I0719 15:47:59.743008   59208 fix.go:200] guest clock delta is within tolerance: 94.999752ms
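[Editor's note] The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it to the host-side timestamp, and accept the machine when the delta (here 94.999752ms) is within tolerance. A minimal stdlib-only sketch of that comparison follows; the tolerance value and function name are assumptions, not minikube's exact implementation.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock difference and whether
// it falls within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(95 * time.Millisecond) // roughly the delta reported in the log
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}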
	I0719 15:47:59.743013   59208 start.go:83] releasing machines lock for "default-k8s-diff-port-601445", held for 20.983876369s
	I0719 15:47:59.743040   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.743262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:59.746145   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746501   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.746534   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746662   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747297   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747461   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747553   59208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:59.747603   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.747714   59208 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:59.747738   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.750268   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750583   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750751   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750916   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750932   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.750942   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.751127   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751170   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.751269   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751353   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751421   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.751489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751646   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.834888   59208 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:59.859285   59208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:00.009771   59208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:00.015906   59208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:00.015973   59208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:00.032129   59208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:00.032150   59208 start.go:495] detecting cgroup driver to use...
	I0719 15:48:00.032215   59208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:00.050052   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:00.063282   59208 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:00.063341   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:00.078073   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:00.092872   59208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:00.217105   59208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:00.364335   59208 docker.go:233] disabling docker service ...
	I0719 15:48:00.364403   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:00.384138   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:00.400280   59208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:00.543779   59208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:00.671512   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:00.687337   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:00.708629   59208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:00.708690   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.720508   59208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:00.720580   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.732952   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.743984   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.756129   59208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:00.766873   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.777481   59208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.799865   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.812450   59208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:00.822900   59208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:00.822964   59208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:00.836117   59208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:00.845958   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:00.959002   59208 ssh_runner.go:195] Run: sudo systemctl restart crio
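[Editor's note] The preceding run lines rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with sed over SSH (pause image, cgroup_manager, conmon_cgroup, default_sysctls) before daemon-reload and a crio restart. Purely as an illustration of the same substitutions done in-process rather than via sed, here is a hypothetical Go sketch; it is not minikube's code path.

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf replaces the pause_image and cgroup_manager lines of a CRI-O
// drop-in, mirroring the sed edits shown in the log.
func patchCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}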
	I0719 15:48:01.104519   59208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:01.104598   59208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:01.110652   59208 start.go:563] Will wait 60s for crictl version
	I0719 15:48:01.110711   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:48:01.114358   59208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:01.156969   59208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:01.157063   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.187963   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.219925   59208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:47:58.299763   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.800069   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.299998   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.800005   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.300717   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.800601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.300433   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.800788   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.300324   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.221101   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:48:01.224369   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:01.224789   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224989   59208 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:01.229813   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:01.243714   59208 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:01.243843   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:01.243886   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:01.283013   59208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:01.283093   59208 ssh_runner.go:195] Run: which lz4
	I0719 15:48:01.287587   59208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:48:01.291937   59208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:01.291965   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:02.810751   59208 crio.go:462] duration metric: took 1.52319928s to copy over tarball
	I0719 15:48:02.810846   59208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:59.767270   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Start
	I0719 15:47:59.767433   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring networks are active...
	I0719 15:47:59.768056   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network default is active
	I0719 15:47:59.768371   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network mk-embed-certs-817144 is active
	I0719 15:47:59.768804   58376 main.go:141] libmachine: (embed-certs-817144) Getting domain xml...
	I0719 15:47:59.769396   58376 main.go:141] libmachine: (embed-certs-817144) Creating domain...
	I0719 15:48:01.024457   58376 main.go:141] libmachine: (embed-certs-817144) Waiting to get IP...
	I0719 15:48:01.025252   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.025697   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.025741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.025660   60153 retry.go:31] will retry after 211.260956ms: waiting for machine to come up
	I0719 15:48:01.238027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.238561   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.238588   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.238529   60153 retry.go:31] will retry after 346.855203ms: waiting for machine to come up
	I0719 15:48:01.587201   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.587773   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.587815   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.587736   60153 retry.go:31] will retry after 327.69901ms: waiting for machine to come up
	I0719 15:48:01.917433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.917899   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.917931   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.917864   60153 retry.go:31] will retry after 474.430535ms: waiting for machine to come up
	I0719 15:48:02.393610   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.394139   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.394168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.394061   60153 retry.go:31] will retry after 491.247455ms: waiting for machine to come up
	I0719 15:48:02.886826   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.887296   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.887329   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.887249   60153 retry.go:31] will retry after 661.619586ms: waiting for machine to come up
	I0719 15:48:03.550633   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:03.551175   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:03.551199   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:03.551126   60153 retry.go:31] will retry after 1.10096194s: waiting for machine to come up
	I0719 15:48:00.583866   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:02.585144   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:03.300240   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.799829   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.299793   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.800609   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.300595   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.799844   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.800150   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.299923   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.800063   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.112520   59208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301644218s)
	I0719 15:48:05.112555   59208 crio.go:469] duration metric: took 2.301774418s to extract the tarball
	I0719 15:48:05.112565   59208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:05.151199   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:05.193673   59208 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:05.193701   59208 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:05.193712   59208 kubeadm.go:934] updating node { 192.168.61.144 8444 v1.30.3 crio true true} ...
	I0719 15:48:05.193836   59208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-601445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:05.193919   59208 ssh_runner.go:195] Run: crio config
	I0719 15:48:05.239103   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:05.239131   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:05.239146   59208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:05.239176   59208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-601445 NodeName:default-k8s-diff-port-601445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:05.239374   59208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-601445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
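[Editor's note] The kubeadm config dumped above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the run later copies to /var/tmp/minikube/kubeadm.yaml.new. The stdlib-only sketch below just lists the kind of each document; splitting on "---" is a simplification that assumes the separator only appears between documents, as it does in this dump.

package main

import (
	"fmt"
	"strings"
)

// kinds returns the "kind:" value of each document in a multi-document YAML string.
func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration]
}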
	I0719 15:48:05.239441   59208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:05.249729   59208 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:05.249799   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:05.259540   59208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 15:48:05.277388   59208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:05.294497   59208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 15:48:05.313990   59208 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:05.318959   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:05.332278   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:05.463771   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:05.480474   59208 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445 for IP: 192.168.61.144
	I0719 15:48:05.480499   59208 certs.go:194] generating shared ca certs ...
	I0719 15:48:05.480520   59208 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:05.480674   59208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:05.480732   59208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:05.480746   59208 certs.go:256] generating profile certs ...
	I0719 15:48:05.480859   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.key
	I0719 15:48:05.480937   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key.e31ea710
	I0719 15:48:05.480992   59208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key
	I0719 15:48:05.481128   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:05.481165   59208 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:05.481180   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:05.481210   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:05.481245   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:05.481276   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:05.481334   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:05.481940   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:05.524604   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:05.562766   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:05.618041   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:05.660224   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 15:48:05.689232   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:05.713890   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:05.738923   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:05.764447   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:05.793905   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:05.823630   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:05.849454   59208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:05.868309   59208 ssh_runner.go:195] Run: openssl version
	I0719 15:48:05.874423   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:05.887310   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.891994   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.892057   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.898173   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:05.911541   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:05.922829   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927537   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927600   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.933642   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:05.946269   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:05.958798   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963899   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963959   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.969801   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
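
[editor's note] The sequence of openssl/ln commands above is how the CA material is installed into the node's system trust store: each PEM is copied into /usr/share/ca-certificates and then symlinked under /etc/ssl/certs by its OpenSSL subject-name hash, which is how OpenSSL looks trusted CAs up. A minimal hand-run sketch of the same steps for the minikube CA; the hash value (b5213941) is the one that appears in this log, the paths are from this run:

    sudo test -s /usr/share/ca-certificates/minikubeCA.pem \
      && sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    # OpenSSL resolves trusted CAs via a <subject-hash>.0 symlink, so compute the hash first:
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo test -L /etc/ssl/certs/${hash}.0 \
      || sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0
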
	I0719 15:48:05.980966   59208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:05.985487   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:05.991303   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:05.997143   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:06.003222   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:06.008984   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:06.014939   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
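
[editor's note] The six openssl invocations just above are validity checks rather than trust-store setup: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is what lets the existing control-plane certs be reused instead of regenerated. A hand-run equivalent for one of them:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (would be regenerated)"
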
	I0719 15:48:06.020976   59208 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:06.021059   59208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:06.021106   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.066439   59208 cri.go:89] found id: ""
	I0719 15:48:06.066503   59208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:06.080640   59208 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:06.080663   59208 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:06.080730   59208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:06.093477   59208 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:06.094740   59208 kubeconfig.go:125] found "default-k8s-diff-port-601445" server: "https://192.168.61.144:8444"
	I0719 15:48:06.096907   59208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:06.107974   59208 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.144
	I0719 15:48:06.108021   59208 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:06.108035   59208 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:06.108109   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.156149   59208 cri.go:89] found id: ""
	I0719 15:48:06.156222   59208 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:06.172431   59208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:06.182482   59208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:06.182511   59208 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:06.182562   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 15:48:06.192288   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:06.192361   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:06.202613   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 15:48:06.212553   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:06.212624   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:06.223086   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.233949   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:06.234007   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.247224   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 15:48:06.257851   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:06.257908   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:06.268650   59208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:06.279549   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:06.421964   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.407768   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.614213   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.686560   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.769476   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:07.769590   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.270472   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.770366   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.795057   59208 api_server.go:72] duration metric: took 1.025580277s to wait for apiserver process to appear ...
	I0719 15:48:08.795086   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:08.795112   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:08.795617   59208 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0719 15:48:09.295459   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:04.653309   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:04.653784   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:04.653846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:04.653753   60153 retry.go:31] will retry after 1.276153596s: waiting for machine to come up
	I0719 15:48:05.931365   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:05.931820   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:05.931848   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:05.931798   60153 retry.go:31] will retry after 1.372328403s: waiting for machine to come up
	I0719 15:48:07.305390   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:07.305892   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:07.305922   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:07.305850   60153 retry.go:31] will retry after 1.738311105s: waiting for machine to come up
	I0719 15:48:09.046095   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:09.046526   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:09.046558   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:09.046481   60153 retry.go:31] will retry after 2.169449629s: waiting for machine to come up
	I0719 15:48:05.084157   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:07.583246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:09.584584   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:11.457584   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.457651   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.457670   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.490130   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.490165   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.795439   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.803724   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:11.803757   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.295287   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.300002   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:12.300034   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.795285   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.800067   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:48:12.808020   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:12.808045   59208 api_server.go:131] duration metric: took 4.012952016s to wait for apiserver health ...
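
[editor's note] The healthz progression above (connection refused, then 403 for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200) is the normal sequence for an apiserver coming back up after a restart. The same endpoint can be polled by hand; host and port are taken from this run, -k skips TLS verification, and the verbose query parameter asks the apiserver to list the individual checks even when healthy:

    curl -k "https://192.168.61.144:8444/healthz?verbose"
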
	I0719 15:48:12.808055   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:12.808064   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:12.810134   59208 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:08.300278   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.799805   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.299882   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.800690   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.300543   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.300260   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.800160   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.812011   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:12.824520   59208 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
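
[editor's note] The 496-byte conflist scp'd above is not reproduced in this log, so the following only illustrates the general shape a bridge CNI config takes, not the exact file that was written; the subnet and plugin options here are placeholders:

    cat /etc/cni/net.d/1-k8s.conflist    # inspect what was actually written on the node
    # Typical bridge conflist shape (illustrative only):
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isGateway": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }
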
	I0719 15:48:12.846711   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:12.855286   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:12.855315   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:12.855322   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:12.855329   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:12.855335   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:12.855345   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:12.855353   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:12.855360   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:12.855369   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:12.855377   59208 system_pods.go:74] duration metric: took 8.645314ms to wait for pod list to return data ...
	I0719 15:48:12.855390   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:12.858531   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:12.858556   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:12.858566   59208 node_conditions.go:105] duration metric: took 3.171526ms to run NodePressure ...
	I0719 15:48:12.858581   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:13.176014   59208 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180575   59208 kubeadm.go:739] kubelet initialised
	I0719 15:48:13.180602   59208 kubeadm.go:740] duration metric: took 4.561708ms waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180612   59208 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:13.187723   59208 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.204023   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204052   59208 pod_ready.go:81] duration metric: took 16.303152ms for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.204061   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204070   59208 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.212768   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212790   59208 pod_ready.go:81] duration metric: took 8.709912ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.212800   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212812   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.220452   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220474   59208 pod_ready.go:81] duration metric: took 7.650656ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.220482   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220489   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.251973   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.251997   59208 pod_ready.go:81] duration metric: took 31.499608ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.252008   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.252029   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.650914   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650940   59208 pod_ready.go:81] duration metric: took 398.904724ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.650948   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650954   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.050582   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050615   59208 pod_ready.go:81] duration metric: took 399.652069ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.050630   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050642   59208 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.450349   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450379   59208 pod_ready.go:81] duration metric: took 399.72875ms for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.450391   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450399   59208 pod_ready.go:38] duration metric: took 1.269776818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:14.450416   59208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:14.462296   59208 ops.go:34] apiserver oom_adj: -16
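
[editor's note] The oom_adj read above is a sanity check on the restarted apiserver: /proc/<pid>/oom_adj is the legacy OOM-score knob, and a negative value (-16 here) means the kernel's OOM killer is far less likely to pick the apiserver than a default-0 process. To repeat the check on the node:

    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # oom_adj is deprecated; the modern equivalent of the same setting:
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
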
	I0719 15:48:14.462318   59208 kubeadm.go:597] duration metric: took 8.38163922s to restartPrimaryControlPlane
	I0719 15:48:14.462329   59208 kubeadm.go:394] duration metric: took 8.441360513s to StartCluster
	I0719 15:48:14.462348   59208 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.462422   59208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:14.464082   59208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.464400   59208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:14.464459   59208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:14.464531   59208 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464570   59208 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.464581   59208 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:14.464592   59208 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464610   59208 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464636   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:14.464670   59208 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:14.464672   59208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-601445"
	W0719 15:48:14.464684   59208 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:14.464613   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.464740   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.465050   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465111   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465151   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465178   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465235   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.466230   59208 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:11.217150   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:11.217605   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:11.217634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:11.217561   60153 retry.go:31] will retry after 3.406637692s: waiting for machine to come up
	I0719 15:48:14.467899   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:14.481294   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0719 15:48:14.481538   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0719 15:48:14.481541   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0719 15:48:14.481658   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.482122   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482145   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482363   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482387   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482461   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482478   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482590   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482704   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482762   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482853   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.483131   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483159   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.483199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483217   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.486437   59208 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.486462   59208 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:14.486492   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.486893   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.486932   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.498388   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0719 15:48:14.498897   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0719 15:48:14.498952   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499251   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499660   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499678   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.499838   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499853   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.500068   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500168   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500232   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.500410   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.501505   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0719 15:48:14.501876   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.502391   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.502413   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.502456   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.502745   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.503006   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.503314   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.503341   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.505162   59208 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:14.505166   59208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:12.084791   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.582986   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.506465   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:14.506487   59208 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:14.506506   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.506585   59208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.506604   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:14.506628   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.510227   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511092   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511134   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511207   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511231   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511257   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511370   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511390   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511570   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511574   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511662   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.511713   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511787   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511840   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.520612   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0719 15:48:14.521013   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.521451   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.521470   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.521817   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.522016   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.523622   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.523862   59208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.523876   59208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:14.523895   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.526426   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.526882   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.526941   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.527060   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.527190   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.527344   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.527439   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.674585   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:14.693700   59208 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:14.752990   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.856330   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:14.856350   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:14.884762   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:14.884784   59208 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:14.895548   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.915815   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:14.915844   59208 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:14.979442   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
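
[editor's note] The addon manifests are applied with the node-local kubectl binary and the kubeconfig that was scp'd to /var/lib/minikube/kubeconfig earlier, not with the host's kubectl. Run over SSH on the node, the equivalent command (paths and version are from this run) is:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.3/kubectl apply \
        -f /etc/kubernetes/addons/metrics-apiservice.yaml \
        -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
        -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
        -f /etc/kubernetes/addons/metrics-server-service.yaml
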
	I0719 15:48:15.098490   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098517   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098869   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.098893   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.098902   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.099141   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.099158   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.105078   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.105252   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.105506   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.105526   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.802868   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.802892   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803265   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803279   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.803285   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.803517   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803530   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803577   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.905945   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.905972   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906244   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906266   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906266   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.906275   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.906283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906484   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906496   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906511   59208 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:15.908671   59208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 15:48:13.299986   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.800036   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.300736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.799875   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.300297   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.800535   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.299951   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.800667   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.300251   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.800590   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.910057   59208 addons.go:510] duration metric: took 1.445597408s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 15:48:16.697266   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:18.698379   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.627319   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:14.627800   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:14.627822   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:14.627767   60153 retry.go:31] will retry after 4.38444645s: waiting for machine to come up
	I0719 15:48:19.016073   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016711   58376 main.go:141] libmachine: (embed-certs-817144) Found IP for machine: 192.168.72.37
	I0719 15:48:19.016742   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has current primary IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016749   58376 main.go:141] libmachine: (embed-certs-817144) Reserving static IP address...
	I0719 15:48:19.017180   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.017204   58376 main.go:141] libmachine: (embed-certs-817144) Reserved static IP address: 192.168.72.37
	I0719 15:48:19.017222   58376 main.go:141] libmachine: (embed-certs-817144) DBG | skip adding static IP to network mk-embed-certs-817144 - found existing host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"}
	I0719 15:48:19.017239   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Getting to WaitForSSH function...
	I0719 15:48:19.017254   58376 main.go:141] libmachine: (embed-certs-817144) Waiting for SSH to be available...
	I0719 15:48:19.019511   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.019867   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.019896   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.020064   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH client type: external
	I0719 15:48:19.020080   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa (-rw-------)
	I0719 15:48:19.020107   58376 main.go:141] libmachine: (embed-certs-817144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:48:19.020115   58376 main.go:141] libmachine: (embed-certs-817144) DBG | About to run SSH command:
	I0719 15:48:19.020124   58376 main.go:141] libmachine: (embed-certs-817144) DBG | exit 0
	I0719 15:48:19.150328   58376 main.go:141] libmachine: (embed-certs-817144) DBG | SSH cmd err, output: <nil>: 
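
(Editor's note: the WaitForSSH exchange above probes the guest by running `exit 0` through an external ssh client until it succeeds. A minimal, self-contained sketch of that probe follows; the address, key path, and option set are simply lifted from the log lines above, and this is an illustration, not libmachine's implementation.)

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` on the guest with the same non-interactive options
// seen in the log and treats a zero exit status as "SSH is available".
func sshReady(addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ok := sshReady("192.168.72.37",
		"/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa")
	fmt.Println("ssh ready:", ok)
}
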
	I0719 15:48:19.150676   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetConfigRaw
	I0719 15:48:19.151317   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.154087   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154600   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.154634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154907   58376 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/config.json ...
	I0719 15:48:19.155143   58376 machine.go:94] provisionDockerMachine start ...
	I0719 15:48:19.155168   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:19.155369   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.157741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.158060   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158175   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.158368   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158618   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158769   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.158945   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.159144   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.159161   58376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:48:19.274836   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:48:19.274863   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275148   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:48:19.275174   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275373   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.278103   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278489   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.278518   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.278892   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279111   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279299   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.279577   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.279798   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.279815   58376 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-817144 && echo "embed-certs-817144" | sudo tee /etc/hostname
	I0719 15:48:19.413956   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-817144
	
	I0719 15:48:19.413988   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.416836   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.417196   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417408   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.417599   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417777   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417911   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.418083   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.418274   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.418290   58376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-817144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-817144/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-817144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:48:16.583538   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.083431   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.541400   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:48:19.541439   58376 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:48:19.541464   58376 buildroot.go:174] setting up certificates
	I0719 15:48:19.541478   58376 provision.go:84] configureAuth start
	I0719 15:48:19.541495   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.541801   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.544209   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544579   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.544608   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544766   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.547206   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.547570   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547714   58376 provision.go:143] copyHostCerts
	I0719 15:48:19.547772   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:48:19.547782   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:48:19.547827   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:48:19.547939   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:48:19.547949   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:48:19.547969   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:48:19.548024   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:48:19.548031   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:48:19.548047   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:48:19.548093   58376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.embed-certs-817144 san=[127.0.0.1 192.168.72.37 embed-certs-817144 localhost minikube]
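
(Editor's note: the step above generates a server certificate for the machine with the SAN list shown in the log. A minimal sketch of producing a certificate with that SAN set using Go's crypto/x509 follows; it is self-signed for brevity, whereas the logged step signs with the minikube CA key, and none of this is minikube's own provisioning code.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs mirror the san=[...] list in the log line above.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-817144"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.37")},
		DNSNames:     []string{"embed-certs-817144", "localhost", "minikube"},
	}
	// Self-signed here; the logged step signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
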
	I0719 15:48:20.024082   58376 provision.go:177] copyRemoteCerts
	I0719 15:48:20.024137   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:48:20.024157   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.026940   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027322   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.027358   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027541   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.027819   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.028011   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.028165   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.117563   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:48:20.144428   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:48:20.171520   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:48:20.195188   58376 provision.go:87] duration metric: took 653.6924ms to configureAuth
	I0719 15:48:20.195215   58376 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:48:20.195432   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:20.195518   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.198648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.198970   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.199007   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.199126   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.199335   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199527   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199687   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.199849   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.200046   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.200063   58376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:48:20.502753   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:48:20.502782   58376 machine.go:97] duration metric: took 1.347623735s to provisionDockerMachine
	I0719 15:48:20.502794   58376 start.go:293] postStartSetup for "embed-certs-817144" (driver="kvm2")
	I0719 15:48:20.502805   58376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:48:20.502821   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.503204   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:48:20.503248   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.506142   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.506563   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506697   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.506938   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.507125   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.507258   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.593356   58376 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:48:20.597843   58376 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:48:20.597877   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:48:20.597948   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:48:20.598048   58376 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:48:20.598164   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:48:20.607951   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:20.634860   58376 start.go:296] duration metric: took 132.043928ms for postStartSetup
	I0719 15:48:20.634900   58376 fix.go:56] duration metric: took 20.891722874s for fixHost
	I0719 15:48:20.634919   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.637846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638181   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.638218   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638439   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.638674   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.638884   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.639054   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.639256   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.639432   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.639444   58376 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:48:20.755076   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404100.730818472
	
	I0719 15:48:20.755107   58376 fix.go:216] guest clock: 1721404100.730818472
	I0719 15:48:20.755115   58376 fix.go:229] Guest: 2024-07-19 15:48:20.730818472 +0000 UTC Remote: 2024-07-19 15:48:20.634903926 +0000 UTC m=+356.193225446 (delta=95.914546ms)
	I0719 15:48:20.755134   58376 fix.go:200] guest clock delta is within tolerance: 95.914546ms
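
(Editor's note: the fix.go lines above read the guest clock over SSH and accept the machine because the guest/host delta of ~96ms is inside the tolerance. A small sketch of that comparison follows; the one-second tolerance used here is an assumption for illustration, not minikube's actual setting.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host clock delta and whether it
// falls inside the given tolerance.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1721404100, 730818472)      // guest clock from the log above
	host := guest.Add(-95914546 * time.Nanosecond) // host clock ~96ms behind
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
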
	I0719 15:48:20.755139   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 21.011996674s
	I0719 15:48:20.755171   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.755465   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:20.758255   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758621   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.758644   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758861   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759348   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759545   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759656   58376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:48:20.759720   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.759780   58376 ssh_runner.go:195] Run: cat /version.json
	I0719 15:48:20.759802   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.762704   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.762833   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763161   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763202   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763399   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763493   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763545   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763608   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763693   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763772   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764001   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763996   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.764156   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764278   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.867430   58376 ssh_runner.go:195] Run: systemctl --version
	I0719 15:48:20.873463   58376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:21.029369   58376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:21.035953   58376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:21.036028   58376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:21.054352   58376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:21.054381   58376 start.go:495] detecting cgroup driver to use...
	I0719 15:48:21.054440   58376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:21.071903   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:21.088624   58376 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:21.088688   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:21.104322   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:21.120089   58376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:21.242310   58376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:21.422514   58376 docker.go:233] disabling docker service ...
	I0719 15:48:21.422589   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:21.439213   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:21.454361   58376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:21.577118   58376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:21.704150   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:21.719160   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:21.738765   58376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:21.738817   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.750720   58376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:21.750798   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.763190   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.775630   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.787727   58376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:21.799520   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.812016   58376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.830564   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.841770   58376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:21.851579   58376 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:21.851651   58376 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:21.864529   58376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:21.874301   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:21.994669   58376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:22.131448   58376 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:22.131521   58376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:22.137328   58376 start.go:563] Will wait 60s for crictl version
	I0719 15:48:22.137391   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:48:22.141409   58376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:22.182947   58376 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:22.183029   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.217804   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.252450   58376 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
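
(Editor's note: after the CRI-O drop-in edits and the crio restart above, the log notes it will wait up to 60s for /var/run/crio/crio.sock. A minimal sketch of that kind of wait follows; the 500ms poll interval is an assumption, and this is not minikube's start.go code.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket path until it exists or the
// timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is ready")
}
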
	I0719 15:48:18.300557   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.800420   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.300696   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.799874   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.300803   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.800634   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.300760   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.799929   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.300267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.800463   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.197350   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:22.197536   59208 node_ready.go:49] node "default-k8s-diff-port-601445" has status "Ready":"True"
	I0719 15:48:22.197558   59208 node_ready.go:38] duration metric: took 7.503825721s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:22.197568   59208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:22.203380   59208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:24.211899   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:22.253862   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:22.256397   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256763   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:22.256791   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256968   58376 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:22.261184   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:22.274804   58376 kubeadm.go:883] updating cluster {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:22.274936   58376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:22.274994   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:22.317501   58376 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:22.317559   58376 ssh_runner.go:195] Run: which lz4
	I0719 15:48:22.321646   58376 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:48:22.326455   58376 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:22.326478   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:23.820083   58376 crio.go:462] duration metric: took 1.498469232s to copy over tarball
	I0719 15:48:23.820155   58376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:48:21.583230   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.585191   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.300116   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.800737   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.300641   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.800158   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.300678   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.800635   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.299778   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.799791   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.299845   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.710838   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.786269   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:26.105248   58376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.285062307s)
	I0719 15:48:26.105271   58376 crio.go:469] duration metric: took 2.285164513s to extract the tarball
	I0719 15:48:26.105279   58376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:26.142811   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:26.185631   58376 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:26.185660   58376 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:26.185668   58376 kubeadm.go:934] updating node { 192.168.72.37 8443 v1.30.3 crio true true} ...
	I0719 15:48:26.185784   58376 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-817144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:26.185857   58376 ssh_runner.go:195] Run: crio config
	I0719 15:48:26.238150   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:26.238172   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:26.238183   58376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:26.238211   58376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.37 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-817144 NodeName:embed-certs-817144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:26.238449   58376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-817144"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
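
(Editor's note: the kubeadm config printed above is generated from the cluster settings listed at kubeadm.go:181. As a rough illustration of rendering such a fragment from a few values with Go's text/template; the template text and field names here are invented for the example and are not minikube's own.)

package main

import (
	"os"
	"text/template"
)

// fragment is an illustrative slice of a kubeadm ClusterConfiguration,
// parameterised on the values that vary per cluster.
const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, PodSubnet, ServiceCIDR string
	}{"v1.30.3", "10.244.0.0/16", "10.96.0.0/12"})
	if err != nil {
		panic(err)
	}
}
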
	
	I0719 15:48:26.238515   58376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:26.249200   58376 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:26.249278   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:26.258710   58376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 15:48:26.279235   58376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:26.299469   58376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0719 15:48:26.317789   58376 ssh_runner.go:195] Run: grep 192.168.72.37	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:26.321564   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:26.333153   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:26.452270   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:26.469344   58376 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144 for IP: 192.168.72.37
	I0719 15:48:26.469366   58376 certs.go:194] generating shared ca certs ...
	I0719 15:48:26.469382   58376 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:26.469530   58376 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:26.469586   58376 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:26.469601   58376 certs.go:256] generating profile certs ...
	I0719 15:48:26.469694   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/client.key
	I0719 15:48:26.469791   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key.928d4c24
	I0719 15:48:26.469846   58376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key
	I0719 15:48:26.469982   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:26.470021   58376 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:26.470035   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:26.470071   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:26.470105   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:26.470140   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:26.470197   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:26.470812   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:26.508455   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:26.537333   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:26.565167   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:26.601152   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 15:48:26.636408   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:26.669076   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:26.695438   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:26.718897   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:26.741760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:26.764760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:26.787772   58376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:26.807332   58376 ssh_runner.go:195] Run: openssl version
	I0719 15:48:26.815182   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:26.827373   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831926   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831973   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.837923   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:48:26.849158   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:26.860466   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865178   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865249   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.870873   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:26.882044   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:26.893283   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897750   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897809   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.903395   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
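
	For reference, the Run: lines above exercise OpenSSL's subject-hash lookup scheme: each CA PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 so the system trust store can resolve it. A minimal Go sketch of that scheme (paths taken from the log; this is an illustration, not minikube's actual certs.go code):

    // hash_symlink.go: create the /etc/ssl/certs/<subject-hash>.0 symlink
    // that the "openssl x509 -hash" + "ln -fs" pair in the log produces.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(pemPath, certsDir string) error {
    	// Same command the log runs: prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	// Equivalent of "ln -fs": drop any stale link, then point it at the PEM.
    	_ = os.Remove(link)
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
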
	I0719 15:48:26.914389   58376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:26.918904   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:26.924659   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:26.930521   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:26.936808   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:26.942548   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:26.948139   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
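
	The -checkend 86400 invocations above only verify that each control-plane certificate is still valid 24 hours (86400 s) from now; a non-zero exit would force regeneration. A small sketch of the equivalent check with crypto/x509, assuming the same certificate paths as the log:

    // checkend.go: native equivalent of "openssl x509 -noout -checkend 86400".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h; would regenerate")
    	}
    }
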
	I0719 15:48:26.954557   58376 kubeadm.go:392] StartCluster: {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:26.954644   58376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:26.954722   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:26.994129   58376 cri.go:89] found id: ""
	I0719 15:48:26.994205   58376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:27.006601   58376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:27.006624   58376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:27.006699   58376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:27.017166   58376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:27.018580   58376 kubeconfig.go:125] found "embed-certs-817144" server: "https://192.168.72.37:8443"
	I0719 15:48:27.021622   58376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:27.033000   58376 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.37
	I0719 15:48:27.033033   58376 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:27.033044   58376 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:27.033083   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:27.073611   58376 cri.go:89] found id: ""
	I0719 15:48:27.073678   58376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:27.092986   58376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:27.103557   58376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:27.103580   58376 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:27.103636   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:48:27.113687   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:27.113752   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:27.123696   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:48:27.132928   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:27.132984   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:27.142566   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.152286   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:27.152335   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.161701   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:48:27.171532   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:27.171591   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
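
	The grep/rm sequence above removes any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 (here all four are simply missing), so the kubeadm phases that follow regenerate them. A condensed sketch of that cleanup loop (an approximation, not minikube's actual kubeadm.go):

    // stale_kubeconfigs.go: drop kubeconfigs that do not target the expected
    // control-plane endpoint so "kubeadm init phase kubeconfig" recreates them.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	const want = "https://control-plane.minikube.internal:8443"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := filepath.Join("/etc/kubernetes", f)
    		data, err := os.ReadFile(path)
    		if err == nil && bytes.Contains(data, []byte(want)) {
    			continue // config already targets the expected endpoint; keep it
    		}
    		fmt.Printf("%s stale or missing, removing\n", path)
    		_ = os.Remove(path) // equivalent of the "sudo rm -f" in the log
    	}
    }
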
	I0719 15:48:27.181229   58376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:27.192232   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:27.330656   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.287561   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.513476   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.616308   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
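
	Rather than a full "kubeadm init", the restart path replays individual init phases against the pinned kubeadm binary, in the order logged above. A minimal sketch of that sequence (an assumed wrapper, not the real bootstrapper code):

    // restart_phases.go: run the kubeadm init phases used for a control-plane restart.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }
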
	I0719 15:48:28.704518   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:28.704605   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.205265   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.082992   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.746255   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.300034   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.800118   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.300099   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.800538   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.300194   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.800056   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.300473   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.300181   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.800267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.704706   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.204728   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.221741   58376 api_server.go:72] duration metric: took 1.517220815s to wait for apiserver process to appear ...
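
	The repeated pgrep lines above are a simple ~500 ms polling loop: the restart is considered to have produced an apiserver process once "pgrep -xnf kube-apiserver.*minikube.*" returns a PID. A sketch of such a loop (the timeout value is an assumption):

    // wait_apiserver_proc.go: poll for the kube-apiserver process after a restart.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits non-zero when nothing matches, so err != nil means "not yet".
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && strings.TrimSpace(string(out)) != "" {
    			fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the cadence visible in the log
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver process")
    	os.Exit(1)
    }
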
	I0719 15:48:30.221766   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:30.221786   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.665104   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.665138   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.665152   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.703238   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.703271   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.722495   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.748303   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:32.748344   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.222861   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.227076   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.227104   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.722705   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.734658   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.734683   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:34.222279   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:34.227870   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:48:34.233621   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:34.233646   58376 api_server.go:131] duration metric: took 4.011873202s to wait for apiserver health ...
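
	The healthz wait above tolerates 403 (anonymous access is rejected until RBAC bootstrap completes) and 500 (post-start hooks still failing) and stops only on the first 200. A minimal sketch of that probe loop, skipping TLS verification like a "curl -k" (the timeout is an assumption):

    // wait_healthz.go: poll https://<apiserver>:8443/healthz until it returns 200 "ok".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver cert is not in this process's trust store; skip verification for the probe.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.72.37:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
    	os.Exit(1)
    }
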
	I0719 15:48:34.233656   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:34.233664   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:34.235220   58376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:30.210533   59208 pod_ready.go:92] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.210557   59208 pod_ready.go:81] duration metric: took 8.007151724s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.210568   59208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215669   59208 pod_ready.go:92] pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.215692   59208 pod_ready.go:81] duration metric: took 5.116005ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215702   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222633   59208 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.222655   59208 pod_ready.go:81] duration metric: took 6.947228ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222664   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227631   59208 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.227656   59208 pod_ready.go:81] duration metric: took 4.985227ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227667   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405047   59208 pod_ready.go:92] pod "kube-proxy-r7b2z" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.405073   59208 pod_ready.go:81] duration metric: took 177.397954ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405085   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805843   59208 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.805877   59208 pod_ready.go:81] duration metric: took 400.783803ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805890   59208 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:32.821231   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.236303   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:34.248133   58376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
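
	The 496-byte file copied above is the bridge CNI configuration for the node. An illustrative conflist of that general shape, written out in Go (field values are assumptions for illustration, not the exact file minikube ships):

    // write_conflist.go: write a bridge + portmap CNI conflist to /etc/cni/net.d.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
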
	I0719 15:48:34.270683   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:34.279907   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:34.279939   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:34.279946   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:34.279953   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:34.279960   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:34.279966   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:34.279972   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:34.279977   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:34.279982   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:34.279988   58376 system_pods.go:74] duration metric: took 9.282886ms to wait for pod list to return data ...
	I0719 15:48:34.279995   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:34.283597   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:34.283623   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:34.283634   58376 node_conditions.go:105] duration metric: took 3.634999ms to run NodePressure ...
	I0719 15:48:34.283649   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:31.082803   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:33.583510   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.586116   58376 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590095   58376 kubeadm.go:739] kubelet initialised
	I0719 15:48:34.590119   58376 kubeadm.go:740] duration metric: took 3.977479ms waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590128   58376 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:34.594987   58376 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.600192   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600212   58376 pod_ready.go:81] duration metric: took 5.205124ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.600220   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600225   58376 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.603934   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603952   58376 pod_ready.go:81] duration metric: took 3.719853ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.603959   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603965   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.607778   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607803   58376 pod_ready.go:81] duration metric: took 3.830174ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.607817   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607826   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.673753   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673775   58376 pod_ready.go:81] duration metric: took 65.937586ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.673783   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673788   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.075506   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075539   58376 pod_ready.go:81] duration metric: took 401.743578ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.075548   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075554   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.474518   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474546   58376 pod_ready.go:81] duration metric: took 398.985628ms for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.474558   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474567   58376 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.874540   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874567   58376 pod_ready.go:81] duration metric: took 399.989978ms for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.874576   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874582   58376 pod_ready.go:38] duration metric: took 1.284443879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
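
	Each pod_ready.go wait above keys off the pod's PodReady condition, and is skipped with the "(skipping!)" message while the hosting node itself is not Ready. A minimal client-go sketch of the readiness test (kubeconfig path and pod name are taken from the log; the helper itself is hypothetical, not minikube's pod_ready.go):

    // pod_ready.go sketch: report whether a pod's PodReady condition is True.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-817144", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ready:", podReady(pod))
    }
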
	I0719 15:48:35.874646   58376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:35.886727   58376 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:35.886751   58376 kubeadm.go:597] duration metric: took 8.880120513s to restartPrimaryControlPlane
	I0719 15:48:35.886760   58376 kubeadm.go:394] duration metric: took 8.932210528s to StartCluster
	I0719 15:48:35.886781   58376 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.886859   58376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:35.888389   58376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.888642   58376 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:35.888722   58376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:35.888781   58376 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-817144"
	I0719 15:48:35.888810   58376 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-817144"
	I0719 15:48:35.888824   58376 addons.go:69] Setting default-storageclass=true in profile "embed-certs-817144"
	I0719 15:48:35.888839   58376 addons.go:69] Setting metrics-server=true in profile "embed-certs-817144"
	I0719 15:48:35.888875   58376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-817144"
	I0719 15:48:35.888888   58376 addons.go:234] Setting addon metrics-server=true in "embed-certs-817144"
	W0719 15:48:35.888897   58376 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:35.888931   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.888840   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0719 15:48:35.888843   58376 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:35.889000   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.889231   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889242   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889247   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889270   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889272   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889282   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.890641   58376 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:35.892144   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:35.905134   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0719 15:48:35.905572   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.905788   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0719 15:48:35.906107   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906132   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.906171   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.906496   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.906825   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906846   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.907126   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.907179   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.907215   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.907289   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.908269   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0719 15:48:35.908747   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.909343   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.909367   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.909787   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.910337   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910382   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.910615   58376 addons.go:234] Setting addon default-storageclass=true in "embed-certs-817144"
	W0719 15:48:35.910632   58376 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:35.910662   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.910937   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910965   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.926165   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 15:48:35.926905   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.926944   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0719 15:48:35.927369   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.927573   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927636   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927829   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927847   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927959   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928512   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.928551   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.928759   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928824   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0719 15:48:35.928964   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.929176   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.929546   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.929557   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.929927   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.930278   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.931161   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.931773   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.933234   58376 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:35.933298   58376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:35.934543   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:35.934556   58376 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:35.934569   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.934629   58376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:35.934642   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:35.934657   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.938300   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938628   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.938648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938679   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939150   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939340   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.939433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.939479   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939536   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.939619   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939673   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.939937   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.940081   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.940190   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.947955   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0719 15:48:35.948206   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.948643   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.948654   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.948961   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.949119   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.950572   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.951770   58376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:35.951779   58376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:35.951791   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.957009   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957381   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.957405   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957550   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.957717   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.957841   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.957953   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:36.072337   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:36.091547   58376 node_ready.go:35] waiting up to 6m0s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:36.182328   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:36.195704   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:36.195729   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:36.221099   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:36.224606   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:36.224632   58376 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:36.247264   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:36.247289   58376 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:36.300365   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:37.231670   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010526005s)
	I0719 15:48:37.231729   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231743   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.231765   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049406285s)
	I0719 15:48:37.231807   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231822   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232034   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232085   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232096   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.232100   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232105   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.232115   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232345   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232366   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233486   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233529   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233541   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.233549   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.233792   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233815   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233832   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.240487   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.240502   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.240732   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.240754   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.240755   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288064   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288085   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288370   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288389   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288378   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288400   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288406   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288595   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288606   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288652   58376 addons.go:475] Verifying addon metrics-server=true in "embed-certs-817144"
	I0719 15:48:37.290497   58376 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:48:33.300279   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.800631   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.300013   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.800051   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.300468   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.800383   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.300186   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.800623   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.300068   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.799841   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.314792   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.814653   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.291961   58376 addons.go:510] duration metric: took 1.403238435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:48:38.096793   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.584345   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.585215   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:38.300002   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.800639   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.300564   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.800314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.300642   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.799787   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.299849   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.300242   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.800481   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.818959   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.313745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:44.314213   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:40.596246   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.095976   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.595640   58376 node_ready.go:49] node "embed-certs-817144" has status "Ready":"True"
	I0719 15:48:43.595659   58376 node_ready.go:38] duration metric: took 7.504089345s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:43.595667   58376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:43.600832   58376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605878   58376 pod_ready.go:92] pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.605900   58376 pod_ready.go:81] duration metric: took 5.046391ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605912   58376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610759   58376 pod_ready.go:92] pod "etcd-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.610778   58376 pod_ready.go:81] duration metric: took 4.85915ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610788   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615239   58376 pod_ready.go:92] pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.615257   58376 pod_ready.go:81] duration metric: took 4.46126ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615267   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619789   58376 pod_ready.go:92] pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.619804   58376 pod_ready.go:81] duration metric: took 4.530085ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619814   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998585   58376 pod_ready.go:92] pod "kube-proxy-4d4g9" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.998612   58376 pod_ready.go:81] duration metric: took 378.78761ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998622   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:40.084033   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.582983   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:43.300412   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.300117   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.799821   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.300031   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.800676   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.300710   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.800307   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.800008   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.812904   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.313178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:46.004415   58376 pod_ready.go:102] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.006304   58376 pod_ready.go:92] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:48.006329   58376 pod_ready.go:81] duration metric: took 4.00769937s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:48.006339   58376 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:45.082973   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:47.582224   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.582782   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.300512   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.799929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:48.799998   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:48.839823   58817 cri.go:89] found id: ""
	I0719 15:48:48.839845   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.839852   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:48.839863   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:48.839920   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:48.874635   58817 cri.go:89] found id: ""
	I0719 15:48:48.874661   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.874671   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:48.874679   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:48.874736   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:48.909391   58817 cri.go:89] found id: ""
	I0719 15:48:48.909417   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.909426   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:48.909431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:48.909491   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:48.951232   58817 cri.go:89] found id: ""
	I0719 15:48:48.951258   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.951265   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:48.951271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:48.951323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:48.984391   58817 cri.go:89] found id: ""
	I0719 15:48:48.984413   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.984420   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:48.984426   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:48.984481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:49.018949   58817 cri.go:89] found id: ""
	I0719 15:48:49.018987   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.018996   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:49.019003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:49.019060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:49.055182   58817 cri.go:89] found id: ""
	I0719 15:48:49.055208   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.055217   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:49.055222   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:49.055270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:49.090341   58817 cri.go:89] found id: ""
	I0719 15:48:49.090364   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.090371   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:49.090378   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:49.090390   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:49.104137   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:49.104166   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:49.239447   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:49.239473   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:49.239489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:49.307270   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:49.307307   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:49.345886   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:49.345925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:51.898153   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:51.911943   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:51.912006   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:51.946512   58817 cri.go:89] found id: ""
	I0719 15:48:51.946562   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.946573   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:51.946603   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:51.946664   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:51.982341   58817 cri.go:89] found id: ""
	I0719 15:48:51.982373   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.982381   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:51.982387   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:51.982441   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:52.019705   58817 cri.go:89] found id: ""
	I0719 15:48:52.019732   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.019739   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:52.019744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:52.019799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:52.057221   58817 cri.go:89] found id: ""
	I0719 15:48:52.057250   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.057262   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:52.057271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:52.057353   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:52.097277   58817 cri.go:89] found id: ""
	I0719 15:48:52.097306   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.097317   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:52.097325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:52.097389   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:52.136354   58817 cri.go:89] found id: ""
	I0719 15:48:52.136398   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.136406   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:52.136412   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:52.136463   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:52.172475   58817 cri.go:89] found id: ""
	I0719 15:48:52.172502   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.172510   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:52.172516   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:52.172565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:52.209164   58817 cri.go:89] found id: ""
	I0719 15:48:52.209192   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.209204   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:52.209214   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:52.209238   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:52.260069   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:52.260101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:52.274794   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:52.274825   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:52.356599   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:52.356628   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:52.356650   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:52.427582   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:52.427630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:51.814049   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:53.815503   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:50.015637   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:52.515491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:51.583726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.083179   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.977864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:54.993571   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:54.993645   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:55.034576   58817 cri.go:89] found id: ""
	I0719 15:48:55.034630   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.034641   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:55.034649   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:55.034712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:55.068305   58817 cri.go:89] found id: ""
	I0719 15:48:55.068332   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.068343   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:55.068350   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:55.068408   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:55.106192   58817 cri.go:89] found id: ""
	I0719 15:48:55.106220   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.106227   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:55.106248   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:55.106304   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:55.141287   58817 cri.go:89] found id: ""
	I0719 15:48:55.141318   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.141328   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:55.141334   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:55.141391   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:55.179965   58817 cri.go:89] found id: ""
	I0719 15:48:55.179989   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.179999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:55.180007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:55.180065   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.213558   58817 cri.go:89] found id: ""
	I0719 15:48:55.213588   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.213598   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:55.213607   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:55.213663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:55.247201   58817 cri.go:89] found id: ""
	I0719 15:48:55.247230   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.247243   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:55.247250   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:55.247309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:55.283157   58817 cri.go:89] found id: ""
	I0719 15:48:55.283191   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.283200   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:55.283211   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:55.283228   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:55.361089   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:55.361116   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:55.361134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:55.437784   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:55.437819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:55.480735   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:55.480770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:55.534013   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:55.534045   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.048567   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:58.063073   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:58.063146   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:58.100499   58817 cri.go:89] found id: ""
	I0719 15:48:58.100527   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.100538   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:58.100545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:58.100612   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:58.136885   58817 cri.go:89] found id: ""
	I0719 15:48:58.136913   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.136924   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:58.136932   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:58.137000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:58.172034   58817 cri.go:89] found id: ""
	I0719 15:48:58.172064   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.172074   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:58.172081   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:58.172135   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:58.209113   58817 cri.go:89] found id: ""
	I0719 15:48:58.209145   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.209157   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:58.209166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:58.209256   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:58.258903   58817 cri.go:89] found id: ""
	I0719 15:48:58.258938   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.258949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:58.258957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:58.259016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.816000   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.817771   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:55.014213   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.014730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:56.083381   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.088572   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.312314   58817 cri.go:89] found id: ""
	I0719 15:48:58.312342   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.312353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:58.312361   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:58.312421   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:58.349566   58817 cri.go:89] found id: ""
	I0719 15:48:58.349628   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.349638   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:58.349645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:58.349709   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:58.383834   58817 cri.go:89] found id: ""
	I0719 15:48:58.383863   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.383880   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:58.383893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:58.383907   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:58.436984   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:58.437020   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.450460   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:58.450489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:58.523392   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:58.523408   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:58.523420   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:58.601407   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:58.601439   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:01.141864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:01.155908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:01.155965   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:01.191492   58817 cri.go:89] found id: ""
	I0719 15:49:01.191524   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.191534   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:01.191542   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:01.191623   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:01.227615   58817 cri.go:89] found id: ""
	I0719 15:49:01.227646   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.227653   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:01.227659   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:01.227716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:01.262624   58817 cri.go:89] found id: ""
	I0719 15:49:01.262647   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.262655   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:01.262661   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:01.262717   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:01.298328   58817 cri.go:89] found id: ""
	I0719 15:49:01.298358   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.298370   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:01.298378   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:01.298439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:01.333181   58817 cri.go:89] found id: ""
	I0719 15:49:01.333208   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.333218   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:01.333225   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:01.333284   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:01.369952   58817 cri.go:89] found id: ""
	I0719 15:49:01.369980   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.369990   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:01.369997   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:01.370076   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:01.405232   58817 cri.go:89] found id: ""
	I0719 15:49:01.405263   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.405273   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:01.405280   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:01.405340   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:01.442960   58817 cri.go:89] found id: ""
	I0719 15:49:01.442989   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.442999   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:01.443009   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:01.443036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:01.493680   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:01.493712   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:01.506699   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:01.506732   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:01.586525   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:01.586547   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:01.586562   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:01.673849   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:01.673897   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:00.313552   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:02.812079   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:59.513087   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:01.514094   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.013514   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:00.583159   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:03.082968   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.219314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:04.233386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:04.233481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:04.274762   58817 cri.go:89] found id: ""
	I0719 15:49:04.274792   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.274802   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:04.274826   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:04.274881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:04.312047   58817 cri.go:89] found id: ""
	I0719 15:49:04.312073   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.312082   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:04.312089   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:04.312164   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:04.351258   58817 cri.go:89] found id: ""
	I0719 15:49:04.351293   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.351307   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:04.351314   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:04.351373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:04.385969   58817 cri.go:89] found id: ""
	I0719 15:49:04.385994   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.386002   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:04.386007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:04.386054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:04.425318   58817 cri.go:89] found id: ""
	I0719 15:49:04.425342   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.425351   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:04.425358   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:04.425416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:04.462578   58817 cri.go:89] found id: ""
	I0719 15:49:04.462607   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.462618   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:04.462626   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:04.462682   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:04.502967   58817 cri.go:89] found id: ""
	I0719 15:49:04.502999   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.503017   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:04.503025   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:04.503084   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:04.540154   58817 cri.go:89] found id: ""
	I0719 15:49:04.540185   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.540195   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:04.540230   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:04.540246   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:04.596126   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:04.596164   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:04.610468   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:04.610509   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:04.683759   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:04.683783   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:04.683803   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:04.764758   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:04.764796   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.303933   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:07.317959   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:07.318031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:07.356462   58817 cri.go:89] found id: ""
	I0719 15:49:07.356490   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.356498   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:07.356511   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:07.356566   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:07.391533   58817 cri.go:89] found id: ""
	I0719 15:49:07.391563   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.391574   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:07.391582   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:07.391662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:07.427877   58817 cri.go:89] found id: ""
	I0719 15:49:07.427914   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.427922   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:07.427927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:07.428005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:07.464667   58817 cri.go:89] found id: ""
	I0719 15:49:07.464691   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.464699   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:07.464704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:07.464768   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:07.499296   58817 cri.go:89] found id: ""
	I0719 15:49:07.499321   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.499329   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:07.499336   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:07.499400   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:07.541683   58817 cri.go:89] found id: ""
	I0719 15:49:07.541715   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.541726   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:07.541733   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:07.541791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:07.577698   58817 cri.go:89] found id: ""
	I0719 15:49:07.577726   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.577737   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:07.577744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:07.577799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:07.613871   58817 cri.go:89] found id: ""
	I0719 15:49:07.613904   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.613914   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:07.613926   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:07.613942   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:07.690982   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:07.691006   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:07.691021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:07.778212   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:07.778277   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.820821   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:07.820866   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:07.873053   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:07.873097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:05.312525   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.812891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:06.013654   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:08.015552   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:05.083931   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.583371   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.387941   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:10.401132   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:10.401205   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:10.437084   58817 cri.go:89] found id: ""
	I0719 15:49:10.437112   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.437120   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:10.437178   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:10.437243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:10.472675   58817 cri.go:89] found id: ""
	I0719 15:49:10.472703   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.472712   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:10.472720   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:10.472780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:10.506448   58817 cri.go:89] found id: ""
	I0719 15:49:10.506480   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.506490   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:10.506497   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:10.506544   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:10.542574   58817 cri.go:89] found id: ""
	I0719 15:49:10.542604   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.542612   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:10.542618   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:10.542701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:10.575963   58817 cri.go:89] found id: ""
	I0719 15:49:10.575990   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.575999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:10.576005   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:10.576063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:10.614498   58817 cri.go:89] found id: ""
	I0719 15:49:10.614529   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.614539   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:10.614548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:10.614613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:10.652802   58817 cri.go:89] found id: ""
	I0719 15:49:10.652825   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.652833   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:10.652838   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:10.652886   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:10.688985   58817 cri.go:89] found id: ""
	I0719 15:49:10.689019   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.689029   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:10.689041   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:10.689058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:10.741552   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:10.741586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.756514   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:10.756542   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:10.837916   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:10.837940   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:10.837956   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:10.919878   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:10.919924   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:09.824389   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.312960   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.512671   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.513359   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.082891   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:14.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:13.462603   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:13.476387   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:13.476449   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:13.514170   58817 cri.go:89] found id: ""
	I0719 15:49:13.514195   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.514205   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:13.514211   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:13.514281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:13.548712   58817 cri.go:89] found id: ""
	I0719 15:49:13.548739   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.548747   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:13.548753   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:13.548808   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:13.582623   58817 cri.go:89] found id: ""
	I0719 15:49:13.582648   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.582657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:13.582664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:13.582721   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:13.619343   58817 cri.go:89] found id: ""
	I0719 15:49:13.619369   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.619379   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:13.619385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:13.619444   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:13.655755   58817 cri.go:89] found id: ""
	I0719 15:49:13.655785   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.655793   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:13.655798   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:13.655856   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:13.691021   58817 cri.go:89] found id: ""
	I0719 15:49:13.691104   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.691124   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:13.691133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:13.691196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:13.728354   58817 cri.go:89] found id: ""
	I0719 15:49:13.728380   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.728390   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:13.728397   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:13.728459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:13.764498   58817 cri.go:89] found id: ""
	I0719 15:49:13.764526   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.764535   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:13.764544   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:13.764557   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.803474   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:13.803500   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:13.854709   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:13.854742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:13.870499   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:13.870526   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:13.943250   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:13.943270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:13.943282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.525806   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:16.539483   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:16.539558   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:16.574003   58817 cri.go:89] found id: ""
	I0719 15:49:16.574032   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.574043   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:16.574050   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:16.574112   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:16.610637   58817 cri.go:89] found id: ""
	I0719 15:49:16.610668   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.610676   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:16.610682   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:16.610731   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:16.648926   58817 cri.go:89] found id: ""
	I0719 15:49:16.648957   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.648968   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:16.648975   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:16.649027   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:16.682819   58817 cri.go:89] found id: ""
	I0719 15:49:16.682848   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.682859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:16.682866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:16.682919   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:16.719879   58817 cri.go:89] found id: ""
	I0719 15:49:16.719912   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.719922   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:16.719930   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:16.719988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:16.755776   58817 cri.go:89] found id: ""
	I0719 15:49:16.755809   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.755820   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:16.755829   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:16.755903   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:16.792158   58817 cri.go:89] found id: ""
	I0719 15:49:16.792186   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.792193   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:16.792199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:16.792260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:16.829694   58817 cri.go:89] found id: ""
	I0719 15:49:16.829722   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.829733   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:16.829741   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:16.829761   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:16.843522   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:16.843552   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:16.914025   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:16.914047   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:16.914063   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.996672   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:16.996709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:17.042138   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:17.042170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:14.813090   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.311701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:15.014386   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.513993   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:16.584566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.082569   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.597598   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:19.611433   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:19.611487   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:19.646047   58817 cri.go:89] found id: ""
	I0719 15:49:19.646073   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.646080   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:19.646086   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:19.646145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:19.683589   58817 cri.go:89] found id: ""
	I0719 15:49:19.683620   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.683632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:19.683643   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:19.683701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:19.722734   58817 cri.go:89] found id: ""
	I0719 15:49:19.722761   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.722771   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:19.722778   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:19.722836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:19.759418   58817 cri.go:89] found id: ""
	I0719 15:49:19.759445   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.759454   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:19.759459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:19.759522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:19.795168   58817 cri.go:89] found id: ""
	I0719 15:49:19.795193   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.795201   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:19.795206   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:19.795259   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:19.830930   58817 cri.go:89] found id: ""
	I0719 15:49:19.830959   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.830969   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:19.830976   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:19.831035   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:19.866165   58817 cri.go:89] found id: ""
	I0719 15:49:19.866187   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.866195   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:19.866201   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:19.866252   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:19.899415   58817 cri.go:89] found id: ""
	I0719 15:49:19.899446   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.899456   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:19.899467   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:19.899482   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.950944   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:19.950975   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:19.964523   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:19.964545   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:20.032244   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:20.032270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:20.032290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:20.110285   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:20.110317   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:22.650693   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:22.666545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:22.666618   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:22.709820   58817 cri.go:89] found id: ""
	I0719 15:49:22.709846   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.709854   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:22.709860   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:22.709905   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:22.745373   58817 cri.go:89] found id: ""
	I0719 15:49:22.745398   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.745406   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:22.745411   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:22.745461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:22.785795   58817 cri.go:89] found id: ""
	I0719 15:49:22.785828   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.785838   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:22.785846   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:22.785904   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:22.826542   58817 cri.go:89] found id: ""
	I0719 15:49:22.826569   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.826579   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:22.826587   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:22.826648   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:22.866761   58817 cri.go:89] found id: ""
	I0719 15:49:22.866789   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.866800   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:22.866807   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:22.866868   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:22.913969   58817 cri.go:89] found id: ""
	I0719 15:49:22.913999   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.914009   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:22.914017   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:22.914082   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:22.950230   58817 cri.go:89] found id: ""
	I0719 15:49:22.950287   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.950298   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:22.950305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:22.950366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:22.986400   58817 cri.go:89] found id: ""
	I0719 15:49:22.986424   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.986434   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:22.986446   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:22.986460   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:23.072119   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:23.072153   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:23.111021   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:23.111053   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:23.161490   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:23.161518   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:23.174729   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:23.174766   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:23.251205   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:19.814129   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.814762   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:23.817102   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:20.012767   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:22.512467   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.587074   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:24.082829   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.752355   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:25.765501   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:25.765559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:25.801073   58817 cri.go:89] found id: ""
	I0719 15:49:25.801107   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.801117   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:25.801126   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:25.801187   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:25.839126   58817 cri.go:89] found id: ""
	I0719 15:49:25.839151   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.839158   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:25.839163   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:25.839210   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:25.873081   58817 cri.go:89] found id: ""
	I0719 15:49:25.873110   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.873120   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:25.873134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:25.873183   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:25.908874   58817 cri.go:89] found id: ""
	I0719 15:49:25.908910   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.908921   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:25.908929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:25.908988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:25.945406   58817 cri.go:89] found id: ""
	I0719 15:49:25.945431   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.945439   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:25.945445   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:25.945515   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:25.978276   58817 cri.go:89] found id: ""
	I0719 15:49:25.978298   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.978306   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:25.978312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:25.978359   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:26.013749   58817 cri.go:89] found id: ""
	I0719 15:49:26.013776   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.013786   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:26.013792   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:26.013840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:26.046225   58817 cri.go:89] found id: ""
	I0719 15:49:26.046269   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.046280   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:26.046290   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:26.046305   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:26.086785   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:26.086808   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:26.138746   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:26.138777   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:26.152114   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:26.152139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:26.224234   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:26.224262   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:26.224279   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:26.312496   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.312687   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.015437   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:27.514515   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:26.084854   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.584103   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.802738   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:28.817246   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:28.817321   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:28.852398   58817 cri.go:89] found id: ""
	I0719 15:49:28.852429   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.852437   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:28.852449   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:28.852500   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:28.890337   58817 cri.go:89] found id: ""
	I0719 15:49:28.890368   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.890378   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:28.890386   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:28.890446   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:28.929083   58817 cri.go:89] found id: ""
	I0719 15:49:28.929106   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.929113   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:28.929119   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:28.929173   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:28.967708   58817 cri.go:89] found id: ""
	I0719 15:49:28.967735   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.967745   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:28.967752   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:28.967812   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:29.001087   58817 cri.go:89] found id: ""
	I0719 15:49:29.001115   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.001131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:29.001139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:29.001198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:29.039227   58817 cri.go:89] found id: ""
	I0719 15:49:29.039258   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.039268   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:29.039275   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:29.039333   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:29.079927   58817 cri.go:89] found id: ""
	I0719 15:49:29.079955   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.079965   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:29.079973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:29.080037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:29.115035   58817 cri.go:89] found id: ""
	I0719 15:49:29.115060   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.115070   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:29.115080   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:29.115094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:29.168452   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:29.168487   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:29.182483   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:29.182517   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:29.256139   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:29.256177   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:29.256193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:29.342435   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:29.342472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:31.888988   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:31.902450   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:31.902524   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:31.940007   58817 cri.go:89] found id: ""
	I0719 15:49:31.940035   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.940045   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:31.940053   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:31.940111   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:31.978055   58817 cri.go:89] found id: ""
	I0719 15:49:31.978089   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.978101   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:31.978109   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:31.978168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:32.011666   58817 cri.go:89] found id: ""
	I0719 15:49:32.011697   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.011707   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:32.011714   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:32.011779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:32.046326   58817 cri.go:89] found id: ""
	I0719 15:49:32.046363   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.046373   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:32.046383   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:32.046447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:32.082387   58817 cri.go:89] found id: ""
	I0719 15:49:32.082416   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.082425   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:32.082432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:32.082488   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:32.118653   58817 cri.go:89] found id: ""
	I0719 15:49:32.118693   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.118703   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:32.118710   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:32.118769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:32.154053   58817 cri.go:89] found id: ""
	I0719 15:49:32.154075   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.154082   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:32.154088   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:32.154134   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:32.189242   58817 cri.go:89] found id: ""
	I0719 15:49:32.189272   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.189283   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:32.189293   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:32.189309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:32.263285   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:32.263313   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:32.263329   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:32.341266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:32.341302   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:32.380827   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:32.380852   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:32.432888   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:32.432922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:30.313153   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:32.812075   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:29.514963   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.515163   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.014174   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.083793   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:33.083838   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.948894   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:34.963787   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:34.963840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:35.000752   58817 cri.go:89] found id: ""
	I0719 15:49:35.000782   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.000788   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:35.000794   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:35.000849   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:35.038325   58817 cri.go:89] found id: ""
	I0719 15:49:35.038355   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.038367   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:35.038375   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:35.038433   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:35.074945   58817 cri.go:89] found id: ""
	I0719 15:49:35.074972   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.074981   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:35.074987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:35.075031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:35.111644   58817 cri.go:89] found id: ""
	I0719 15:49:35.111671   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.111681   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:35.111688   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:35.111746   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:35.146101   58817 cri.go:89] found id: ""
	I0719 15:49:35.146132   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.146141   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:35.146148   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:35.146198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:35.185147   58817 cri.go:89] found id: ""
	I0719 15:49:35.185173   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.185181   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:35.185188   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:35.185233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:35.227899   58817 cri.go:89] found id: ""
	I0719 15:49:35.227931   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.227941   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:35.227949   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:35.228010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:35.265417   58817 cri.go:89] found id: ""
	I0719 15:49:35.265441   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.265451   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:35.265462   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:35.265477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:35.316534   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:35.316567   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:35.330131   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:35.330154   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:35.401068   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:35.401091   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:35.401107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:35.477126   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:35.477170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.019443   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:38.035957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:38.036032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:38.078249   58817 cri.go:89] found id: ""
	I0719 15:49:38.078278   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.078288   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:38.078296   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:38.078367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:38.125072   58817 cri.go:89] found id: ""
	I0719 15:49:38.125098   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.125106   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:38.125112   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:38.125171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:38.165134   58817 cri.go:89] found id: ""
	I0719 15:49:38.165160   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.165170   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:38.165178   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:38.165233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:38.204968   58817 cri.go:89] found id: ""
	I0719 15:49:38.204995   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.205004   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:38.205013   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:38.205074   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:38.237132   58817 cri.go:89] found id: ""
	I0719 15:49:38.237157   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.237167   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:38.237174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:38.237231   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:34.812542   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.311929   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.312244   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:36.513892   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.013261   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:35.084098   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.587696   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:38.274661   58817 cri.go:89] found id: ""
	I0719 15:49:38.274691   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.274699   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:38.274704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:38.274747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:38.311326   58817 cri.go:89] found id: ""
	I0719 15:49:38.311354   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.311365   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:38.311372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:38.311428   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:38.348071   58817 cri.go:89] found id: ""
	I0719 15:49:38.348099   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.348110   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:38.348120   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:38.348134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:38.432986   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:38.433021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.472439   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:38.472486   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:38.526672   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:38.526706   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:38.540777   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:38.540800   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:38.617657   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.118442   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:41.131935   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:41.132016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:41.164303   58817 cri.go:89] found id: ""
	I0719 15:49:41.164330   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.164342   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:41.164348   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:41.164396   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:41.197878   58817 cri.go:89] found id: ""
	I0719 15:49:41.197901   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.197909   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:41.197927   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:41.197979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:41.231682   58817 cri.go:89] found id: ""
	I0719 15:49:41.231712   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.231722   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:41.231730   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:41.231793   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:41.268328   58817 cri.go:89] found id: ""
	I0719 15:49:41.268354   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.268364   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:41.268372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:41.268422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:41.306322   58817 cri.go:89] found id: ""
	I0719 15:49:41.306350   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.306358   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:41.306365   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:41.306416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:41.342332   58817 cri.go:89] found id: ""
	I0719 15:49:41.342361   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.342372   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:41.342379   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:41.342440   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:41.378326   58817 cri.go:89] found id: ""
	I0719 15:49:41.378352   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.378362   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:41.378371   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:41.378422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:41.410776   58817 cri.go:89] found id: ""
	I0719 15:49:41.410804   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.410814   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:41.410824   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:41.410843   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:41.424133   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:41.424157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:41.498684   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.498764   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:41.498784   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:41.583440   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:41.583472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:41.624962   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:41.624998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:41.313207   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.815916   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:41.013495   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.513445   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:40.082726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:42.583599   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.584503   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.177094   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:44.191411   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:44.191466   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:44.226809   58817 cri.go:89] found id: ""
	I0719 15:49:44.226837   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.226847   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:44.226855   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:44.226951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:44.262361   58817 cri.go:89] found id: ""
	I0719 15:49:44.262391   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.262402   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:44.262408   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:44.262452   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:44.295729   58817 cri.go:89] found id: ""
	I0719 15:49:44.295758   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.295768   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:44.295775   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:44.295836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:44.330968   58817 cri.go:89] found id: ""
	I0719 15:49:44.330996   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.331005   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:44.331012   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:44.331068   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:44.367914   58817 cri.go:89] found id: ""
	I0719 15:49:44.367937   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.367945   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:44.367951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:44.368005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:44.401127   58817 cri.go:89] found id: ""
	I0719 15:49:44.401151   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.401159   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:44.401164   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:44.401207   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:44.435696   58817 cri.go:89] found id: ""
	I0719 15:49:44.435724   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.435734   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:44.435741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:44.435803   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:44.481553   58817 cri.go:89] found id: ""
	I0719 15:49:44.481582   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.481592   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:44.481603   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:44.481618   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:44.573147   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:44.573181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:44.618556   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:44.618580   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.673328   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:44.673364   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:44.687806   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:44.687835   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:44.763624   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.264039   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:47.277902   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:47.277984   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:47.318672   58817 cri.go:89] found id: ""
	I0719 15:49:47.318702   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.318713   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:47.318720   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:47.318780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:47.360410   58817 cri.go:89] found id: ""
	I0719 15:49:47.360434   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.360444   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:47.360451   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:47.360507   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:47.397890   58817 cri.go:89] found id: ""
	I0719 15:49:47.397918   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.397925   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:47.397931   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:47.397981   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:47.438930   58817 cri.go:89] found id: ""
	I0719 15:49:47.438960   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.438971   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:47.438981   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:47.439040   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:47.479242   58817 cri.go:89] found id: ""
	I0719 15:49:47.479267   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.479277   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:47.479285   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:47.479341   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:47.518583   58817 cri.go:89] found id: ""
	I0719 15:49:47.518610   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.518620   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:47.518628   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:47.518686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:47.553714   58817 cri.go:89] found id: ""
	I0719 15:49:47.553736   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.553744   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:47.553750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:47.553798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:47.591856   58817 cri.go:89] found id: ""
	I0719 15:49:47.591879   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.591886   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:47.591893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:47.591904   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:47.644911   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:47.644951   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:47.659718   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:47.659742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:47.735693   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.735713   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:47.735727   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:47.816090   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:47.816121   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:46.313534   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.811536   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:46.012299   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.515396   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:47.082848   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:49.083291   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.358703   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:50.373832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:50.373908   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:50.408598   58817 cri.go:89] found id: ""
	I0719 15:49:50.408640   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.408649   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:50.408655   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:50.408701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:50.446067   58817 cri.go:89] found id: ""
	I0719 15:49:50.446096   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.446104   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:50.446110   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:50.446152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:50.480886   58817 cri.go:89] found id: ""
	I0719 15:49:50.480918   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.480927   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:50.480933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:50.480997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:50.514680   58817 cri.go:89] found id: ""
	I0719 15:49:50.514707   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.514717   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:50.514724   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:50.514779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:50.550829   58817 cri.go:89] found id: ""
	I0719 15:49:50.550854   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.550861   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:50.550866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:50.550910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:50.585407   58817 cri.go:89] found id: ""
	I0719 15:49:50.585434   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.585444   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:50.585452   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:50.585511   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:50.623083   58817 cri.go:89] found id: ""
	I0719 15:49:50.623110   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.623121   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:50.623129   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:50.623181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:50.667231   58817 cri.go:89] found id: ""
	I0719 15:49:50.667258   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.667266   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:50.667274   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:50.667290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:50.718998   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:50.719032   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:50.733560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:50.733595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:50.800276   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:50.800298   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:50.800310   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:50.881314   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:50.881354   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:50.813781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:52.817124   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.516602   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.012716   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:51.083390   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.583030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.427179   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:53.444191   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:53.444250   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:53.481092   58817 cri.go:89] found id: ""
	I0719 15:49:53.481125   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.481135   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:53.481143   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:53.481202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:53.517308   58817 cri.go:89] found id: ""
	I0719 15:49:53.517332   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.517340   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:53.517345   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:53.517390   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:53.552638   58817 cri.go:89] found id: ""
	I0719 15:49:53.552667   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.552677   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:53.552684   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:53.552750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:53.587003   58817 cri.go:89] found id: ""
	I0719 15:49:53.587027   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.587034   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:53.587044   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:53.587093   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:53.620361   58817 cri.go:89] found id: ""
	I0719 15:49:53.620389   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.620399   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:53.620406   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:53.620464   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:53.659231   58817 cri.go:89] found id: ""
	I0719 15:49:53.659255   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.659262   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:53.659267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:53.659323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:53.695312   58817 cri.go:89] found id: ""
	I0719 15:49:53.695345   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.695355   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:53.695362   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:53.695430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:53.735670   58817 cri.go:89] found id: ""
	I0719 15:49:53.735698   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.735708   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:53.735718   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:53.735733   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:53.750912   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:53.750940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:53.818038   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:53.818064   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:53.818077   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:53.902200   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:53.902259   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.945805   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:53.945847   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.498178   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:56.511454   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:56.511541   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:56.548043   58817 cri.go:89] found id: ""
	I0719 15:49:56.548070   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.548081   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:56.548089   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:56.548149   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:56.583597   58817 cri.go:89] found id: ""
	I0719 15:49:56.583620   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.583632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:56.583651   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:56.583710   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:56.622673   58817 cri.go:89] found id: ""
	I0719 15:49:56.622704   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.622714   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:56.622722   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:56.622785   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:56.659663   58817 cri.go:89] found id: ""
	I0719 15:49:56.659691   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.659702   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:56.659711   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:56.659764   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:56.694072   58817 cri.go:89] found id: ""
	I0719 15:49:56.694097   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.694105   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:56.694111   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:56.694158   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:56.730104   58817 cri.go:89] found id: ""
	I0719 15:49:56.730131   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.730139   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:56.730144   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:56.730202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:56.762952   58817 cri.go:89] found id: ""
	I0719 15:49:56.762977   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.762988   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:56.762995   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:56.763059   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:56.800091   58817 cri.go:89] found id: ""
	I0719 15:49:56.800114   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.800122   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:56.800130   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:56.800141   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:56.843328   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:56.843363   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.894700   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:56.894734   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:56.908975   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:56.908999   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:56.980062   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:56.980087   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:56.980099   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:55.312032   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.813778   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:55.013719   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.014070   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:56.083506   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:58.582593   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.557467   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:59.571083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:59.571151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:59.606593   58817 cri.go:89] found id: ""
	I0719 15:49:59.606669   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.606680   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:59.606688   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:59.606743   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:59.643086   58817 cri.go:89] found id: ""
	I0719 15:49:59.643115   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.643126   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:59.643134   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:59.643188   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:59.678976   58817 cri.go:89] found id: ""
	I0719 15:49:59.678995   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.679002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:59.679008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:59.679060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:59.713450   58817 cri.go:89] found id: ""
	I0719 15:49:59.713483   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.713490   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:59.713495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:59.713540   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:59.749902   58817 cri.go:89] found id: ""
	I0719 15:49:59.749924   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.749932   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:59.749938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:59.749985   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:59.793298   58817 cri.go:89] found id: ""
	I0719 15:49:59.793327   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.793335   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:59.793341   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:59.793399   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:59.835014   58817 cri.go:89] found id: ""
	I0719 15:49:59.835040   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.835047   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:59.835053   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:59.835101   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:59.874798   58817 cri.go:89] found id: ""
	I0719 15:49:59.874824   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.874831   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:59.874840   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:59.874851   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:59.948173   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:59.948195   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:59.948210   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:00.026793   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:00.026828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:00.066659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:00.066687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:00.119005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:00.119036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:02.634375   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:02.648845   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:02.648918   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:02.683204   58817 cri.go:89] found id: ""
	I0719 15:50:02.683231   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.683240   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:02.683246   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:02.683308   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:02.718869   58817 cri.go:89] found id: ""
	I0719 15:50:02.718901   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.718914   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:02.718921   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:02.718979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:02.758847   58817 cri.go:89] found id: ""
	I0719 15:50:02.758874   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.758885   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:02.758892   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:02.758951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:02.800199   58817 cri.go:89] found id: ""
	I0719 15:50:02.800230   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.800238   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:02.800243   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:02.800289   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:02.840302   58817 cri.go:89] found id: ""
	I0719 15:50:02.840334   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.840345   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:02.840353   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:02.840415   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:02.874769   58817 cri.go:89] found id: ""
	I0719 15:50:02.874794   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.874801   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:02.874818   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:02.874885   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:02.914492   58817 cri.go:89] found id: ""
	I0719 15:50:02.914522   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.914532   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:02.914540   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:02.914601   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:02.951548   58817 cri.go:89] found id: ""
	I0719 15:50:02.951577   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.951588   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:02.951599   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:02.951613   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:03.003081   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:03.003118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:03.017738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:03.017767   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:03.090925   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:03.090947   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:03.090958   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:03.169066   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:03.169101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:59.815894   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.312541   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.513158   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.013500   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:00.583268   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:03.082967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.712269   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:05.724799   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:05.724872   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:05.759074   58817 cri.go:89] found id: ""
	I0719 15:50:05.759101   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.759108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:05.759113   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:05.759169   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:05.798316   58817 cri.go:89] found id: ""
	I0719 15:50:05.798413   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.798432   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:05.798442   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:05.798504   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:05.834861   58817 cri.go:89] found id: ""
	I0719 15:50:05.834890   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.834898   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:05.834903   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:05.834962   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:05.868547   58817 cri.go:89] found id: ""
	I0719 15:50:05.868574   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.868582   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:05.868588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:05.868691   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:05.903684   58817 cri.go:89] found id: ""
	I0719 15:50:05.903718   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.903730   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:05.903738   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:05.903798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:05.938521   58817 cri.go:89] found id: ""
	I0719 15:50:05.938552   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.938567   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:05.938576   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:05.938628   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:05.973683   58817 cri.go:89] found id: ""
	I0719 15:50:05.973710   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.973717   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:05.973723   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:05.973825   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:06.010528   58817 cri.go:89] found id: ""
	I0719 15:50:06.010559   58817 logs.go:276] 0 containers: []
	W0719 15:50:06.010569   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:06.010580   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:06.010593   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:06.053090   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:06.053145   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:06.106906   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:06.106939   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:06.121914   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:06.121944   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:06.197465   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:06.197492   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:06.197507   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:04.814326   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.314104   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:04.513144   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.013900   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.014269   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.582967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.583076   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.583550   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:08.782285   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:08.795115   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:08.795180   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:08.834264   58817 cri.go:89] found id: ""
	I0719 15:50:08.834295   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.834306   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:08.834314   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:08.834371   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:08.873227   58817 cri.go:89] found id: ""
	I0719 15:50:08.873258   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.873268   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:08.873276   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:08.873330   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:08.907901   58817 cri.go:89] found id: ""
	I0719 15:50:08.907929   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.907940   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:08.907948   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:08.908011   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:08.941350   58817 cri.go:89] found id: ""
	I0719 15:50:08.941381   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.941391   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:08.941400   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:08.941453   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:08.978469   58817 cri.go:89] found id: ""
	I0719 15:50:08.978495   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.978502   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:08.978508   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:08.978563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:09.017469   58817 cri.go:89] found id: ""
	I0719 15:50:09.017492   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.017501   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:09.017509   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:09.017563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:09.056675   58817 cri.go:89] found id: ""
	I0719 15:50:09.056703   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.056711   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:09.056718   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:09.056769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:09.096655   58817 cri.go:89] found id: ""
	I0719 15:50:09.096680   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.096688   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:09.096696   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:09.096710   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:09.135765   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:09.135791   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:09.189008   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:09.189044   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:09.203988   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:09.204014   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:09.278418   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:09.278440   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:09.278453   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:11.857017   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:11.870592   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:11.870650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:11.907057   58817 cri.go:89] found id: ""
	I0719 15:50:11.907088   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.907097   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:11.907103   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:11.907152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:11.944438   58817 cri.go:89] found id: ""
	I0719 15:50:11.944466   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.944476   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:11.944484   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:11.944547   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:11.986506   58817 cri.go:89] found id: ""
	I0719 15:50:11.986534   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.986545   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:11.986553   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:11.986610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:12.026171   58817 cri.go:89] found id: ""
	I0719 15:50:12.026221   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.026250   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:12.026260   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:12.026329   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:12.060990   58817 cri.go:89] found id: ""
	I0719 15:50:12.061018   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.061028   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:12.061036   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:12.061097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:12.098545   58817 cri.go:89] found id: ""
	I0719 15:50:12.098573   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.098584   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:12.098591   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:12.098650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:12.134949   58817 cri.go:89] found id: ""
	I0719 15:50:12.134978   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.134989   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:12.134996   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:12.135061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:12.171142   58817 cri.go:89] found id: ""
	I0719 15:50:12.171165   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.171173   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:12.171181   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:12.171193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:12.211496   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:12.211536   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:12.266024   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:12.266060   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:12.280951   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:12.280985   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:12.352245   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:12.352269   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:12.352280   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:09.813831   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.815120   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.815551   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.512872   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.514351   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.584717   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.082745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.929733   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:14.943732   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:14.943815   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:14.980506   58817 cri.go:89] found id: ""
	I0719 15:50:14.980529   58817 logs.go:276] 0 containers: []
	W0719 15:50:14.980539   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:14.980545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:14.980590   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:15.015825   58817 cri.go:89] found id: ""
	I0719 15:50:15.015853   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.015863   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:15.015870   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:15.015937   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:15.054862   58817 cri.go:89] found id: ""
	I0719 15:50:15.054894   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.054905   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:15.054913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:15.054973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:15.092542   58817 cri.go:89] found id: ""
	I0719 15:50:15.092573   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.092590   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:15.092598   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:15.092663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:15.127815   58817 cri.go:89] found id: ""
	I0719 15:50:15.127843   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.127853   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:15.127865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:15.127931   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:15.166423   58817 cri.go:89] found id: ""
	I0719 15:50:15.166446   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.166453   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:15.166459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:15.166517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.199240   58817 cri.go:89] found id: ""
	I0719 15:50:15.199268   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.199277   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:15.199283   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:15.199336   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:15.231927   58817 cri.go:89] found id: ""
	I0719 15:50:15.231957   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.231966   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:15.231978   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:15.231994   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:15.284551   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:15.284586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:15.299152   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:15.299181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:15.374085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:15.374107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:15.374123   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:15.458103   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:15.458144   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:18.003862   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:18.019166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:18.019215   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:18.053430   58817 cri.go:89] found id: ""
	I0719 15:50:18.053470   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.053482   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:18.053492   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:18.053565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:18.091897   58817 cri.go:89] found id: ""
	I0719 15:50:18.091922   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.091931   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:18.091936   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:18.091997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:18.127239   58817 cri.go:89] found id: ""
	I0719 15:50:18.127266   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.127277   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:18.127287   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:18.127346   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:18.163927   58817 cri.go:89] found id: ""
	I0719 15:50:18.163953   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.163965   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:18.163973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:18.164032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:18.199985   58817 cri.go:89] found id: ""
	I0719 15:50:18.200015   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.200027   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:18.200034   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:18.200096   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:18.234576   58817 cri.go:89] found id: ""
	I0719 15:50:18.234603   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.234614   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:18.234625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:18.234686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.815701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:17.816052   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.012834   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.014504   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.582156   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.583011   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.270493   58817 cri.go:89] found id: ""
	I0719 15:50:18.270516   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.270526   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:18.270532   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:18.270588   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:18.306779   58817 cri.go:89] found id: ""
	I0719 15:50:18.306813   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.306821   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:18.306832   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:18.306850   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:18.375782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:18.375814   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:18.390595   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:18.390630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:18.459204   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:18.459227   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:18.459243   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:18.540667   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:18.540724   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.084736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:21.099416   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:21.099495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:21.133193   58817 cri.go:89] found id: ""
	I0719 15:50:21.133216   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.133224   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:21.133231   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:21.133309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:21.174649   58817 cri.go:89] found id: ""
	I0719 15:50:21.174679   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.174689   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:21.174697   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:21.174757   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:21.208279   58817 cri.go:89] found id: ""
	I0719 15:50:21.208309   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.208319   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:21.208325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:21.208386   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:21.242199   58817 cri.go:89] found id: ""
	I0719 15:50:21.242222   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.242229   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:21.242247   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:21.242301   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:21.278018   58817 cri.go:89] found id: ""
	I0719 15:50:21.278050   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.278059   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:21.278069   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:21.278125   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:21.314397   58817 cri.go:89] found id: ""
	I0719 15:50:21.314419   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.314427   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:21.314435   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:21.314490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:21.349041   58817 cri.go:89] found id: ""
	I0719 15:50:21.349067   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.349075   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:21.349080   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:21.349129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:21.387325   58817 cri.go:89] found id: ""
	I0719 15:50:21.387353   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.387361   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:21.387369   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:21.387384   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:21.401150   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:21.401177   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:21.465784   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:21.465810   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:21.465821   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:21.545965   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:21.545998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.584054   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:21.584081   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:20.312912   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:22.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:20.513572   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.014103   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:21.082689   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.583483   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:24.139199   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:24.152485   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:24.152552   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:24.186387   58817 cri.go:89] found id: ""
	I0719 15:50:24.186417   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.186427   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:24.186435   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:24.186494   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:24.226061   58817 cri.go:89] found id: ""
	I0719 15:50:24.226093   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.226103   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:24.226111   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:24.226168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:24.265542   58817 cri.go:89] found id: ""
	I0719 15:50:24.265566   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.265574   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:24.265579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:24.265630   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:24.300277   58817 cri.go:89] found id: ""
	I0719 15:50:24.300308   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.300318   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:24.300325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:24.300378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:24.340163   58817 cri.go:89] found id: ""
	I0719 15:50:24.340192   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.340203   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:24.340211   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:24.340270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:24.375841   58817 cri.go:89] found id: ""
	I0719 15:50:24.375863   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.375873   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:24.375881   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:24.375941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:24.413528   58817 cri.go:89] found id: ""
	I0719 15:50:24.413558   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.413569   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:24.413577   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:24.413641   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:24.451101   58817 cri.go:89] found id: ""
	I0719 15:50:24.451129   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.451139   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:24.451148   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:24.451163   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:24.491150   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:24.491178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.544403   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:24.544436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:24.560376   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:24.560407   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:24.633061   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:24.633081   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:24.633097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.214261   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:27.227642   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:27.227724   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:27.263805   58817 cri.go:89] found id: ""
	I0719 15:50:27.263838   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.263851   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:27.263859   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:27.263941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:27.299817   58817 cri.go:89] found id: ""
	I0719 15:50:27.299860   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.299872   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:27.299879   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:27.299947   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:27.339924   58817 cri.go:89] found id: ""
	I0719 15:50:27.339953   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.339963   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:27.339971   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:27.340036   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:27.375850   58817 cri.go:89] found id: ""
	I0719 15:50:27.375877   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.375885   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:27.375891   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:27.375940   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:27.410395   58817 cri.go:89] found id: ""
	I0719 15:50:27.410420   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.410429   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:27.410437   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:27.410498   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:27.444124   58817 cri.go:89] found id: ""
	I0719 15:50:27.444154   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.444162   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:27.444167   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:27.444230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:27.478162   58817 cri.go:89] found id: ""
	I0719 15:50:27.478191   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.478202   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:27.478210   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:27.478285   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:27.514901   58817 cri.go:89] found id: ""
	I0719 15:50:27.514939   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.514949   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:27.514959   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:27.514973   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.591783   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:27.591815   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:27.629389   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:27.629431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:27.684318   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:27.684351   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:27.698415   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:27.698441   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:27.770032   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:25.312127   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.312599   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.512955   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.515102   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.583597   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:28.083843   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.270332   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:30.284645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:30.284716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:30.324096   58817 cri.go:89] found id: ""
	I0719 15:50:30.324120   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.324128   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:30.324133   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:30.324181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:30.362682   58817 cri.go:89] found id: ""
	I0719 15:50:30.362749   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.362769   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:30.362777   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:30.362848   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:30.400797   58817 cri.go:89] found id: ""
	I0719 15:50:30.400829   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.400840   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:30.400847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:30.400910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:30.438441   58817 cri.go:89] found id: ""
	I0719 15:50:30.438471   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.438482   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:30.438490   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:30.438556   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:30.481525   58817 cri.go:89] found id: ""
	I0719 15:50:30.481555   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.481567   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:30.481581   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:30.481643   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:30.527384   58817 cri.go:89] found id: ""
	I0719 15:50:30.527416   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.527426   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:30.527434   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:30.527495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:30.591502   58817 cri.go:89] found id: ""
	I0719 15:50:30.591530   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.591540   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:30.591548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:30.591603   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:30.627271   58817 cri.go:89] found id: ""
	I0719 15:50:30.627298   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.627306   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:30.627315   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:30.627326   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:30.680411   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:30.680463   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:30.694309   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:30.694344   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:30.771740   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.771776   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:30.771794   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:30.857591   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:30.857625   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:29.815683   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.312009   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.312309   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.013332   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.013381   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.082937   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.407376   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:33.421602   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:33.421680   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:33.458608   58817 cri.go:89] found id: ""
	I0719 15:50:33.458640   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.458650   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:33.458658   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:33.458720   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:33.494250   58817 cri.go:89] found id: ""
	I0719 15:50:33.494279   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.494290   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:33.494298   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:33.494363   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:33.534768   58817 cri.go:89] found id: ""
	I0719 15:50:33.534793   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.534804   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:33.534811   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:33.534876   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:33.569912   58817 cri.go:89] found id: ""
	I0719 15:50:33.569942   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.569950   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:33.569955   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:33.570010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:33.605462   58817 cri.go:89] found id: ""
	I0719 15:50:33.605486   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.605496   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:33.605503   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:33.605569   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:33.649091   58817 cri.go:89] found id: ""
	I0719 15:50:33.649121   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.649129   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:33.649134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:33.649184   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:33.682056   58817 cri.go:89] found id: ""
	I0719 15:50:33.682084   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.682092   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:33.682097   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:33.682145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:33.717454   58817 cri.go:89] found id: ""
	I0719 15:50:33.717483   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.717492   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:33.717501   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:33.717513   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:33.770793   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:33.770828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:33.784549   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:33.784583   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:33.860831   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:33.860851   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:33.860862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:33.936003   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:33.936037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.476206   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:36.489032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:36.489090   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:36.525070   58817 cri.go:89] found id: ""
	I0719 15:50:36.525098   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.525108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:36.525116   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:36.525171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:36.560278   58817 cri.go:89] found id: ""
	I0719 15:50:36.560301   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.560309   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:36.560315   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:36.560367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:36.595594   58817 cri.go:89] found id: ""
	I0719 15:50:36.595620   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.595630   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:36.595637   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:36.595696   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:36.631403   58817 cri.go:89] found id: ""
	I0719 15:50:36.631434   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.631442   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:36.631447   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:36.631502   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:36.671387   58817 cri.go:89] found id: ""
	I0719 15:50:36.671413   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.671424   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:36.671431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:36.671492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:36.705473   58817 cri.go:89] found id: ""
	I0719 15:50:36.705500   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.705507   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:36.705514   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:36.705559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:36.741077   58817 cri.go:89] found id: ""
	I0719 15:50:36.741110   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.741126   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:36.741133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:36.741195   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:36.781987   58817 cri.go:89] found id: ""
	I0719 15:50:36.782016   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.782025   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:36.782036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:36.782051   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:36.795107   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:36.795138   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:36.869034   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:36.869056   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:36.869070   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:36.946172   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:36.946207   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.983497   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:36.983535   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:36.812745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.312184   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.513321   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:36.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.012035   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:35.084310   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:37.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.537658   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:39.551682   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:39.551756   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:39.588176   58817 cri.go:89] found id: ""
	I0719 15:50:39.588199   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.588206   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:39.588212   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:39.588255   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:39.623202   58817 cri.go:89] found id: ""
	I0719 15:50:39.623235   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.623245   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:39.623265   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:39.623317   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:39.658601   58817 cri.go:89] found id: ""
	I0719 15:50:39.658634   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.658646   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:39.658653   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:39.658712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:39.694820   58817 cri.go:89] found id: ""
	I0719 15:50:39.694842   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.694852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:39.694859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:39.694922   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:39.734296   58817 cri.go:89] found id: ""
	I0719 15:50:39.734325   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.734333   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:39.734339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:39.734393   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:39.773416   58817 cri.go:89] found id: ""
	I0719 15:50:39.773506   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.773527   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:39.773538   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:39.773614   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:39.812265   58817 cri.go:89] found id: ""
	I0719 15:50:39.812293   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.812303   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:39.812311   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:39.812366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:39.849148   58817 cri.go:89] found id: ""
	I0719 15:50:39.849177   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.849188   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:39.849199   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:39.849213   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.900254   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:39.900285   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:39.913997   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:39.914025   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:39.986937   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:39.986963   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:39.986982   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:40.071967   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:40.072009   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:42.612170   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:42.625741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:42.625824   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:42.662199   58817 cri.go:89] found id: ""
	I0719 15:50:42.662230   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.662253   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:42.662261   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:42.662314   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:42.702346   58817 cri.go:89] found id: ""
	I0719 15:50:42.702374   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.702387   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:42.702394   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:42.702454   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:42.743446   58817 cri.go:89] found id: ""
	I0719 15:50:42.743475   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.743488   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:42.743495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:42.743555   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:42.783820   58817 cri.go:89] found id: ""
	I0719 15:50:42.783844   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.783852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:42.783858   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:42.783917   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:42.821375   58817 cri.go:89] found id: ""
	I0719 15:50:42.821403   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.821414   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:42.821421   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:42.821484   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:42.856010   58817 cri.go:89] found id: ""
	I0719 15:50:42.856037   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.856045   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:42.856051   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:42.856097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:42.895867   58817 cri.go:89] found id: ""
	I0719 15:50:42.895894   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.895902   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:42.895908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:42.895955   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:42.933077   58817 cri.go:89] found id: ""
	I0719 15:50:42.933106   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.933114   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:42.933123   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:42.933135   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:42.984103   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:42.984142   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:42.998043   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:42.998075   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:43.069188   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:43.069210   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:43.069222   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:43.148933   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:43.148991   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:41.313263   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.816257   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:41.014458   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.017012   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:40.083591   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:42.582246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:44.582857   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.687007   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:45.701019   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:45.701099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:45.737934   58817 cri.go:89] found id: ""
	I0719 15:50:45.737960   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.737970   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:45.737978   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:45.738037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:45.774401   58817 cri.go:89] found id: ""
	I0719 15:50:45.774428   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.774438   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:45.774447   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:45.774503   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:45.814507   58817 cri.go:89] found id: ""
	I0719 15:50:45.814533   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.814544   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:45.814551   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:45.814610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:45.855827   58817 cri.go:89] found id: ""
	I0719 15:50:45.855852   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.855870   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:45.855877   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:45.855928   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:45.898168   58817 cri.go:89] found id: ""
	I0719 15:50:45.898196   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.898204   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:45.898209   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:45.898281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:45.933402   58817 cri.go:89] found id: ""
	I0719 15:50:45.933433   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.933449   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:45.933468   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:45.933525   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:45.971415   58817 cri.go:89] found id: ""
	I0719 15:50:45.971443   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.971451   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:45.971457   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:45.971508   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:46.006700   58817 cri.go:89] found id: ""
	I0719 15:50:46.006729   58817 logs.go:276] 0 containers: []
	W0719 15:50:46.006739   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:46.006750   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:46.006764   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:46.083885   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:46.083925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:46.122277   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:46.122308   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:46.172907   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:46.172940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:46.186365   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:46.186392   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:46.263803   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:46.312320   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.312805   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.512849   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.013822   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:46.582906   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.583537   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.764336   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:48.778927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:48.779002   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:48.816538   58817 cri.go:89] found id: ""
	I0719 15:50:48.816566   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.816576   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:48.816589   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:48.816657   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:48.852881   58817 cri.go:89] found id: ""
	I0719 15:50:48.852904   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.852912   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:48.852925   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:48.852987   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:48.886156   58817 cri.go:89] found id: ""
	I0719 15:50:48.886187   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.886196   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:48.886202   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:48.886271   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:48.922221   58817 cri.go:89] found id: ""
	I0719 15:50:48.922270   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.922281   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:48.922289   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:48.922350   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:48.957707   58817 cri.go:89] found id: ""
	I0719 15:50:48.957735   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.957743   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:48.957750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:48.957797   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:48.994635   58817 cri.go:89] found id: ""
	I0719 15:50:48.994667   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.994679   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:48.994687   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:48.994747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:49.028849   58817 cri.go:89] found id: ""
	I0719 15:50:49.028873   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.028881   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:49.028886   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:49.028933   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:49.063835   58817 cri.go:89] found id: ""
	I0719 15:50:49.063865   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.063875   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:49.063885   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:49.063900   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:49.144709   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:49.144751   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:49.184783   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:49.184819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:49.237005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:49.237037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:49.250568   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:49.250595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:49.319473   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:51.820132   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:51.833230   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:51.833298   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:51.870393   58817 cri.go:89] found id: ""
	I0719 15:50:51.870424   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.870435   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:51.870442   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:51.870496   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:51.906094   58817 cri.go:89] found id: ""
	I0719 15:50:51.906119   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.906132   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:51.906139   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:51.906192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:51.941212   58817 cri.go:89] found id: ""
	I0719 15:50:51.941236   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.941244   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:51.941257   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:51.941300   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:51.973902   58817 cri.go:89] found id: ""
	I0719 15:50:51.973925   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.973933   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:51.973938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:51.973983   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:52.010449   58817 cri.go:89] found id: ""
	I0719 15:50:52.010476   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.010486   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:52.010493   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:52.010551   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:52.047317   58817 cri.go:89] found id: ""
	I0719 15:50:52.047343   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.047353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:52.047360   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:52.047405   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:52.081828   58817 cri.go:89] found id: ""
	I0719 15:50:52.081859   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.081868   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:52.081875   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:52.081946   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:52.119128   58817 cri.go:89] found id: ""
	I0719 15:50:52.119156   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.119164   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:52.119172   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:52.119185   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:52.132928   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:52.132955   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:52.203075   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:52.203099   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:52.203114   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:52.278743   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:52.278781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:52.325456   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:52.325492   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:50.815488   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.312626   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:50.013996   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:52.514493   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:51.082358   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.582566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:54.879243   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:54.894078   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:54.894147   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:54.931463   58817 cri.go:89] found id: ""
	I0719 15:50:54.931496   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.931507   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:54.931514   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:54.931585   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:54.968803   58817 cri.go:89] found id: ""
	I0719 15:50:54.968831   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.968840   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:54.968847   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:54.968911   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:55.005621   58817 cri.go:89] found id: ""
	I0719 15:50:55.005646   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.005657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:55.005664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:55.005733   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:55.040271   58817 cri.go:89] found id: ""
	I0719 15:50:55.040292   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.040299   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:55.040305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:55.040349   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:55.072693   58817 cri.go:89] found id: ""
	I0719 15:50:55.072714   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.072722   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:55.072728   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:55.072779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:55.111346   58817 cri.go:89] found id: ""
	I0719 15:50:55.111373   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.111381   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:55.111386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:55.111430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:55.149358   58817 cri.go:89] found id: ""
	I0719 15:50:55.149385   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.149395   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:55.149402   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:55.149459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:55.183807   58817 cri.go:89] found id: ""
	I0719 15:50:55.183834   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.183845   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:55.183856   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:55.183870   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:55.234128   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:55.234157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:55.247947   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:55.247971   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:55.317405   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:55.317425   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:55.317436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:55.398613   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:55.398649   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:57.945601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:57.960139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:57.960193   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:58.000436   58817 cri.go:89] found id: ""
	I0719 15:50:58.000462   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.000469   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:58.000476   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:58.000522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:58.041437   58817 cri.go:89] found id: ""
	I0719 15:50:58.041463   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.041472   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:58.041477   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:58.041539   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:58.077280   58817 cri.go:89] found id: ""
	I0719 15:50:58.077303   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.077311   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:58.077317   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:58.077373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:58.111992   58817 cri.go:89] found id: ""
	I0719 15:50:58.112019   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.112026   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:58.112032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:58.112107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:58.146582   58817 cri.go:89] found id: ""
	I0719 15:50:58.146610   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.146620   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:58.146625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:58.146669   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:58.182159   58817 cri.go:89] found id: ""
	I0719 15:50:58.182187   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.182196   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:58.182204   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:58.182279   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:58.215804   58817 cri.go:89] found id: ""
	I0719 15:50:58.215834   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.215844   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:58.215852   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:58.215913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:58.249366   58817 cri.go:89] found id: ""
	I0719 15:50:58.249392   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.249402   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:58.249413   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:58.249430   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:50:55.814460   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.313739   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:55.014039   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:57.513248   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:56.082876   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.583172   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	W0719 15:50:58.324510   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:58.324536   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:58.324550   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:58.406320   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:58.406353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:58.449820   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:58.449854   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:58.502245   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:58.502281   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:01.018374   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:01.032683   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:01.032753   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:01.071867   58817 cri.go:89] found id: ""
	I0719 15:51:01.071898   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.071910   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:01.071917   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:01.071982   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:01.108227   58817 cri.go:89] found id: ""
	I0719 15:51:01.108251   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.108259   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:01.108264   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:01.108309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:01.143029   58817 cri.go:89] found id: ""
	I0719 15:51:01.143064   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.143076   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:01.143083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:01.143154   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:01.178871   58817 cri.go:89] found id: ""
	I0719 15:51:01.178901   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.178911   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:01.178919   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:01.178974   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:01.216476   58817 cri.go:89] found id: ""
	I0719 15:51:01.216507   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.216518   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:01.216526   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:01.216584   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:01.254534   58817 cri.go:89] found id: ""
	I0719 15:51:01.254557   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.254565   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:01.254572   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:01.254617   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:01.293156   58817 cri.go:89] found id: ""
	I0719 15:51:01.293187   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.293198   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:01.293212   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:01.293278   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:01.328509   58817 cri.go:89] found id: ""
	I0719 15:51:01.328538   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.328549   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:01.328560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:01.328574   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:01.399659   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:01.399678   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:01.399693   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:01.476954   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:01.476993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:01.519513   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:01.519539   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:01.571976   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:01.572015   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:00.812445   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.813629   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.011751   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.013062   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.013473   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.584028   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:03.082149   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.088726   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:04.102579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:04.102642   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:04.141850   58817 cri.go:89] found id: ""
	I0719 15:51:04.141888   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.141899   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:04.141907   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:04.141988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:04.177821   58817 cri.go:89] found id: ""
	I0719 15:51:04.177846   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.177854   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:04.177859   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:04.177914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:04.212905   58817 cri.go:89] found id: ""
	I0719 15:51:04.212935   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.212945   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:04.212951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:04.213012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:04.249724   58817 cri.go:89] found id: ""
	I0719 15:51:04.249762   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.249773   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:04.249781   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:04.249843   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:04.285373   58817 cri.go:89] found id: ""
	I0719 15:51:04.285407   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.285418   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:04.285430   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:04.285490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:04.348842   58817 cri.go:89] found id: ""
	I0719 15:51:04.348878   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.348888   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:04.348895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:04.348963   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:04.384420   58817 cri.go:89] found id: ""
	I0719 15:51:04.384448   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.384459   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:04.384466   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:04.384533   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:04.420716   58817 cri.go:89] found id: ""
	I0719 15:51:04.420746   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.420754   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:04.420763   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:04.420775   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:04.472986   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:04.473027   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.488911   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:04.488938   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:04.563103   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:04.563125   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:04.563139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:04.640110   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:04.640151   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.183190   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:07.196605   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:07.196667   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:07.234974   58817 cri.go:89] found id: ""
	I0719 15:51:07.235002   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.235010   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:07.235016   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:07.235066   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:07.269045   58817 cri.go:89] found id: ""
	I0719 15:51:07.269078   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.269089   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:07.269096   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:07.269156   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:07.308866   58817 cri.go:89] found id: ""
	I0719 15:51:07.308897   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.308907   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:07.308914   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:07.308973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:07.344406   58817 cri.go:89] found id: ""
	I0719 15:51:07.344440   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.344451   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:07.344459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:07.344517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:07.379914   58817 cri.go:89] found id: ""
	I0719 15:51:07.379948   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.379956   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:07.379962   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:07.380010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:07.420884   58817 cri.go:89] found id: ""
	I0719 15:51:07.420923   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.420934   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:07.420942   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:07.421012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:07.455012   58817 cri.go:89] found id: ""
	I0719 15:51:07.455041   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.455071   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:07.455082   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:07.455151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:07.492321   58817 cri.go:89] found id: ""
	I0719 15:51:07.492346   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.492354   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:07.492362   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:07.492374   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:07.506377   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:07.506408   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:07.578895   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:07.578928   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:07.578943   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:07.662333   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:07.662373   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.701823   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:07.701856   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:05.312865   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.816945   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:06.513634   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.012283   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:05.084185   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.583429   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.583944   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:10.256610   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:10.270156   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:10.270225   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:10.311318   58817 cri.go:89] found id: ""
	I0719 15:51:10.311347   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.311357   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:10.311365   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:10.311422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:10.347145   58817 cri.go:89] found id: ""
	I0719 15:51:10.347174   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.347183   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:10.347189   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:10.347243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:10.381626   58817 cri.go:89] found id: ""
	I0719 15:51:10.381659   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.381672   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:10.381680   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:10.381750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:10.417077   58817 cri.go:89] found id: ""
	I0719 15:51:10.417103   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.417111   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:10.417117   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:10.417174   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:10.454094   58817 cri.go:89] found id: ""
	I0719 15:51:10.454123   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.454131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:10.454137   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:10.454185   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:10.489713   58817 cri.go:89] found id: ""
	I0719 15:51:10.489739   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.489747   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:10.489753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:10.489799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:10.524700   58817 cri.go:89] found id: ""
	I0719 15:51:10.524737   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.524745   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:10.524753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:10.524810   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:10.564249   58817 cri.go:89] found id: ""
	I0719 15:51:10.564277   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.564285   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:10.564293   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:10.564309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.618563   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:10.618599   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:10.633032   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:10.633058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:10.706504   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:10.706530   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:10.706546   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:10.800542   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:10.800581   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:10.315941   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:12.812732   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.013749   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.513338   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.584335   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:14.083745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.357761   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:13.371415   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:13.371492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:13.406666   58817 cri.go:89] found id: ""
	I0719 15:51:13.406695   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.406705   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:13.406713   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:13.406773   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:13.448125   58817 cri.go:89] found id: ""
	I0719 15:51:13.448153   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.448164   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:13.448171   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:13.448233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:13.483281   58817 cri.go:89] found id: ""
	I0719 15:51:13.483306   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.483315   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:13.483323   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:13.483384   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:13.522499   58817 cri.go:89] found id: ""
	I0719 15:51:13.522527   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.522538   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:13.522545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:13.522605   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:13.560011   58817 cri.go:89] found id: ""
	I0719 15:51:13.560038   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.560049   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:13.560056   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:13.560115   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:13.596777   58817 cri.go:89] found id: ""
	I0719 15:51:13.596812   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.596824   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:13.596832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:13.596883   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:13.633765   58817 cri.go:89] found id: ""
	I0719 15:51:13.633790   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.633798   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:13.633804   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:13.633857   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:13.670129   58817 cri.go:89] found id: ""
	I0719 15:51:13.670151   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.670160   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:13.670168   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:13.670179   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:13.745337   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:13.745363   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:13.745375   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:13.827800   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:13.827831   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.871659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:13.871695   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:13.925445   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:13.925478   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.439455   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:16.454414   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:16.454485   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:16.494962   58817 cri.go:89] found id: ""
	I0719 15:51:16.494987   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.494997   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:16.495004   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:16.495048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:16.540948   58817 cri.go:89] found id: ""
	I0719 15:51:16.540978   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.540986   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:16.540992   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:16.541052   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:16.588886   58817 cri.go:89] found id: ""
	I0719 15:51:16.588916   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.588926   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:16.588933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:16.588990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:16.649174   58817 cri.go:89] found id: ""
	I0719 15:51:16.649198   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.649207   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:16.649214   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:16.649260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:16.688759   58817 cri.go:89] found id: ""
	I0719 15:51:16.688787   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.688794   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:16.688800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:16.688860   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:16.724730   58817 cri.go:89] found id: ""
	I0719 15:51:16.724759   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.724767   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:16.724773   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:16.724831   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:16.762972   58817 cri.go:89] found id: ""
	I0719 15:51:16.762995   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.763002   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:16.763007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:16.763058   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:16.798054   58817 cri.go:89] found id: ""
	I0719 15:51:16.798080   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.798088   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:16.798096   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:16.798107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:16.887495   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:16.887533   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:16.929384   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:16.929412   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:16.978331   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:16.978362   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.991663   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:16.991687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:17.064706   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:15.311404   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:17.312317   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.013193   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:18.014317   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.583403   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.082807   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.565881   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:19.579476   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:19.579536   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:19.614551   58817 cri.go:89] found id: ""
	I0719 15:51:19.614576   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.614586   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:19.614595   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:19.614655   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:19.657984   58817 cri.go:89] found id: ""
	I0719 15:51:19.658012   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.658023   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:19.658030   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:19.658098   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:19.692759   58817 cri.go:89] found id: ""
	I0719 15:51:19.692785   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.692793   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:19.692800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:19.692855   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:19.726119   58817 cri.go:89] found id: ""
	I0719 15:51:19.726148   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.726158   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:19.726174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:19.726230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:19.763348   58817 cri.go:89] found id: ""
	I0719 15:51:19.763372   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.763379   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:19.763385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:19.763439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:19.796880   58817 cri.go:89] found id: ""
	I0719 15:51:19.796909   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.796923   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:19.796929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:19.796977   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:19.831819   58817 cri.go:89] found id: ""
	I0719 15:51:19.831845   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.831853   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:19.831859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:19.831913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:19.866787   58817 cri.go:89] found id: ""
	I0719 15:51:19.866814   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.866825   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:19.866835   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:19.866848   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:19.914087   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:19.914120   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.927236   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:19.927260   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:19.995619   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:19.995643   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:19.995658   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:20.084355   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:20.084385   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:22.623263   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:22.637745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:22.637818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:22.678276   58817 cri.go:89] found id: ""
	I0719 15:51:22.678305   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.678317   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:22.678325   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:22.678378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:22.716710   58817 cri.go:89] found id: ""
	I0719 15:51:22.716736   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.716753   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:22.716761   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:22.716828   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:22.754965   58817 cri.go:89] found id: ""
	I0719 15:51:22.754993   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.755002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:22.755008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:22.755054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:22.788474   58817 cri.go:89] found id: ""
	I0719 15:51:22.788508   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.788519   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:22.788527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:22.788586   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:22.823838   58817 cri.go:89] found id: ""
	I0719 15:51:22.823872   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.823882   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:22.823889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:22.823950   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:22.863086   58817 cri.go:89] found id: ""
	I0719 15:51:22.863127   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.863138   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:22.863146   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:22.863211   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:22.899292   58817 cri.go:89] found id: ""
	I0719 15:51:22.899321   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.899331   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:22.899339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:22.899403   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:22.932292   58817 cri.go:89] found id: ""
	I0719 15:51:22.932318   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.932328   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:22.932338   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:22.932353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:23.003438   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:23.003460   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:23.003477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:23.088349   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:23.088391   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:23.132169   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:23.132194   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:23.184036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:23.184069   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.812659   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.813178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.311781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:20.512610   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:22.512707   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.083030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:23.583501   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.698493   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:25.712199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:25.712267   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:25.750330   58817 cri.go:89] found id: ""
	I0719 15:51:25.750358   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.750368   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:25.750375   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:25.750434   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:25.784747   58817 cri.go:89] found id: ""
	I0719 15:51:25.784777   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.784788   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:25.784794   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:25.784853   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:25.821272   58817 cri.go:89] found id: ""
	I0719 15:51:25.821297   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.821308   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:25.821315   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:25.821370   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:25.858697   58817 cri.go:89] found id: ""
	I0719 15:51:25.858723   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.858732   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:25.858737   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:25.858782   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:25.901706   58817 cri.go:89] found id: ""
	I0719 15:51:25.901738   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.901749   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:25.901757   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:25.901818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:25.943073   58817 cri.go:89] found id: ""
	I0719 15:51:25.943103   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.943115   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:25.943122   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:25.943190   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:25.982707   58817 cri.go:89] found id: ""
	I0719 15:51:25.982731   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.982739   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:25.982745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:25.982791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:26.023419   58817 cri.go:89] found id: ""
	I0719 15:51:26.023442   58817 logs.go:276] 0 containers: []
	W0719 15:51:26.023449   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:26.023456   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:26.023468   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:26.103842   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:26.103875   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:26.143567   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:26.143594   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:26.199821   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:26.199862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:26.214829   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:26.214865   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:26.287368   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:26.312416   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.313406   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.513171   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:27.012377   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:29.014890   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.583785   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.083633   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.788202   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:28.801609   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:28.801676   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:28.834911   58817 cri.go:89] found id: ""
	I0719 15:51:28.834937   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.834947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:28.834955   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:28.835013   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:28.868219   58817 cri.go:89] found id: ""
	I0719 15:51:28.868242   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.868250   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:28.868256   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:28.868315   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:28.904034   58817 cri.go:89] found id: ""
	I0719 15:51:28.904055   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.904063   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:28.904068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:28.904121   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:28.941019   58817 cri.go:89] found id: ""
	I0719 15:51:28.941051   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.941061   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:28.941068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:28.941129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:28.976309   58817 cri.go:89] found id: ""
	I0719 15:51:28.976335   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.976346   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:28.976352   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:28.976410   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:29.011340   58817 cri.go:89] found id: ""
	I0719 15:51:29.011368   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.011378   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:29.011388   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:29.011447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:29.044356   58817 cri.go:89] found id: ""
	I0719 15:51:29.044378   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.044385   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:29.044390   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:29.044438   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:29.080883   58817 cri.go:89] found id: ""
	I0719 15:51:29.080910   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.080919   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:29.080929   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:29.080941   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:29.160266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:29.160303   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:29.198221   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:29.198267   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:29.249058   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:29.249088   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:29.262711   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:29.262740   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:29.335654   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:31.836354   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:31.851895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:31.851957   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:31.887001   58817 cri.go:89] found id: ""
	I0719 15:51:31.887036   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.887052   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:31.887058   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:31.887107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:31.922102   58817 cri.go:89] found id: ""
	I0719 15:51:31.922132   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.922140   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:31.922145   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:31.922196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:31.960183   58817 cri.go:89] found id: ""
	I0719 15:51:31.960208   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.960215   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:31.960221   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:31.960263   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:31.994822   58817 cri.go:89] found id: ""
	I0719 15:51:31.994849   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.994859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:31.994865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:31.994912   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:32.034110   58817 cri.go:89] found id: ""
	I0719 15:51:32.034136   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.034145   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:32.034151   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:32.034209   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:32.071808   58817 cri.go:89] found id: ""
	I0719 15:51:32.071834   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.071842   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:32.071847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:32.071910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:32.110784   58817 cri.go:89] found id: ""
	I0719 15:51:32.110810   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.110820   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:32.110828   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:32.110895   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:32.148052   58817 cri.go:89] found id: ""
	I0719 15:51:32.148086   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.148097   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:32.148108   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:32.148124   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:32.198891   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:32.198926   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:32.212225   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:32.212251   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:32.288389   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:32.288412   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:32.288431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:32.368196   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:32.368229   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:30.811822   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.813013   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:31.512155   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.012636   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:30.083916   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.582845   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.582945   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.911872   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:34.926689   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:34.926771   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:34.959953   58817 cri.go:89] found id: ""
	I0719 15:51:34.959982   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.959992   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:34.960000   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:34.960061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:34.999177   58817 cri.go:89] found id: ""
	I0719 15:51:34.999206   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.999216   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:34.999223   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:34.999283   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:35.036001   58817 cri.go:89] found id: ""
	I0719 15:51:35.036034   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.036045   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:35.036052   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:35.036099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:35.070375   58817 cri.go:89] found id: ""
	I0719 15:51:35.070404   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.070415   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:35.070423   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:35.070483   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:35.106940   58817 cri.go:89] found id: ""
	I0719 15:51:35.106969   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.106979   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:35.106984   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:35.107031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:35.151664   58817 cri.go:89] found id: ""
	I0719 15:51:35.151688   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.151695   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:35.151700   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:35.151748   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:35.187536   58817 cri.go:89] found id: ""
	I0719 15:51:35.187564   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.187578   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:35.187588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:35.187662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.222614   58817 cri.go:89] found id: ""
	I0719 15:51:35.222642   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.222652   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:35.222662   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:35.222677   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:35.273782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:35.273816   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:35.288147   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:35.288176   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:35.361085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:35.361107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:35.361118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:35.443327   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:35.443358   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:37.994508   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:38.007709   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:38.007779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:38.040910   58817 cri.go:89] found id: ""
	I0719 15:51:38.040940   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.040947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:38.040954   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:38.040999   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:38.080009   58817 cri.go:89] found id: ""
	I0719 15:51:38.080039   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.080058   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:38.080066   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:38.080137   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:38.115997   58817 cri.go:89] found id: ""
	I0719 15:51:38.116018   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.116026   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:38.116031   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:38.116079   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:38.150951   58817 cri.go:89] found id: ""
	I0719 15:51:38.150973   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.150981   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:38.150987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:38.151045   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:38.184903   58817 cri.go:89] found id: ""
	I0719 15:51:38.184938   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.184949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:38.184956   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:38.185014   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:38.218099   58817 cri.go:89] found id: ""
	I0719 15:51:38.218123   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.218131   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:38.218138   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:38.218192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:38.252965   58817 cri.go:89] found id: ""
	I0719 15:51:38.252990   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.252997   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:38.253003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:38.253047   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.313638   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:37.813400   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.013415   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.513387   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.583140   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:39.084770   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.289710   58817 cri.go:89] found id: ""
	I0719 15:51:38.289739   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.289749   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:38.289757   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:38.289770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:38.340686   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:38.340715   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:38.354334   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:38.354357   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:38.424410   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:38.424438   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:38.424452   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:38.500744   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:38.500781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.043436   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:41.056857   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:41.056914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:41.093651   58817 cri.go:89] found id: ""
	I0719 15:51:41.093678   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.093688   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:41.093695   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:41.093749   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:41.129544   58817 cri.go:89] found id: ""
	I0719 15:51:41.129572   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.129580   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:41.129586   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:41.129646   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:41.163416   58817 cri.go:89] found id: ""
	I0719 15:51:41.163444   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.163457   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:41.163465   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:41.163520   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:41.199180   58817 cri.go:89] found id: ""
	I0719 15:51:41.199205   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.199212   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:41.199220   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:41.199274   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:41.233891   58817 cri.go:89] found id: ""
	I0719 15:51:41.233919   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.233929   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:41.233936   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:41.233990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:41.270749   58817 cri.go:89] found id: ""
	I0719 15:51:41.270777   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.270788   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:41.270794   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:41.270841   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:41.308365   58817 cri.go:89] found id: ""
	I0719 15:51:41.308393   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.308402   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:41.308408   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:41.308462   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:41.344692   58817 cri.go:89] found id: ""
	I0719 15:51:41.344720   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.344729   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:41.344738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:41.344749   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:41.420009   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:41.420035   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:41.420052   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:41.503356   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:41.503397   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.543875   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:41.543905   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:41.595322   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:41.595353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:40.312909   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:42.812703   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.011956   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:43.513117   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.584336   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.082447   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.110343   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:44.125297   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:44.125365   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:44.160356   58817 cri.go:89] found id: ""
	I0719 15:51:44.160387   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.160398   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:44.160405   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:44.160461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:44.195025   58817 cri.go:89] found id: ""
	I0719 15:51:44.195055   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.195065   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:44.195073   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:44.195140   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:44.227871   58817 cri.go:89] found id: ""
	I0719 15:51:44.227907   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.227929   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:44.227937   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:44.228000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:44.265270   58817 cri.go:89] found id: ""
	I0719 15:51:44.265296   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.265305   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:44.265312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:44.265368   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:44.298714   58817 cri.go:89] found id: ""
	I0719 15:51:44.298744   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.298755   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:44.298762   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:44.298826   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:44.332638   58817 cri.go:89] found id: ""
	I0719 15:51:44.332665   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.332673   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:44.332679   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:44.332738   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:44.366871   58817 cri.go:89] found id: ""
	I0719 15:51:44.366897   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.366906   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:44.366913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:44.366980   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:44.409353   58817 cri.go:89] found id: ""
	I0719 15:51:44.409381   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.409392   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:44.409402   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:44.409417   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.446148   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:44.446178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:44.497188   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:44.497217   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.511904   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:44.511935   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:44.577175   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:44.577193   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:44.577208   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.161809   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:47.175425   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:47.175490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:47.213648   58817 cri.go:89] found id: ""
	I0719 15:51:47.213674   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.213681   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:47.213687   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:47.213737   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:47.249941   58817 cri.go:89] found id: ""
	I0719 15:51:47.249967   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.249979   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:47.249986   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:47.250041   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:47.284232   58817 cri.go:89] found id: ""
	I0719 15:51:47.284254   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.284261   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:47.284267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:47.284318   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:47.321733   58817 cri.go:89] found id: ""
	I0719 15:51:47.321767   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.321778   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:47.321786   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:47.321844   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:47.358479   58817 cri.go:89] found id: ""
	I0719 15:51:47.358508   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.358520   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:47.358527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:47.358582   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:47.390070   58817 cri.go:89] found id: ""
	I0719 15:51:47.390098   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.390108   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:47.390116   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:47.390176   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:47.429084   58817 cri.go:89] found id: ""
	I0719 15:51:47.429111   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.429118   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:47.429124   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:47.429179   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:47.469938   58817 cri.go:89] found id: ""
	I0719 15:51:47.469969   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.469979   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:47.469991   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:47.470005   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:47.524080   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:47.524110   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:47.538963   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:47.538993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:47.609107   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:47.609128   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:47.609143   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.691984   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:47.692028   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
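The cycle of "listing CRI containers" and "Gathering logs" lines above is minikube's diagnostic collector probing a control plane that has not come up: each probe is a single SSH command against the guest, and each one returns empty because no containers exist yet. A minimal sketch of running the same probes by hand inside the guest VM, using only the commands and flags shown verbatim in the log (this is not minikube's actual implementation):

    # Sketch: manual equivalents of the probes logged above (run inside the guest).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"   # empty output means "No container was found"
    done
    sudo journalctl -u kubelet -n 400                                          # kubelet logs
    sudo journalctl -u crio -n 400                                             # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: the apiserver on :8443 is down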
	I0719 15:51:44.813328   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:47.318119   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.013597   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.513037   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.083435   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.582222   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.234104   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:50.248706   58817 kubeadm.go:597] duration metric: took 4m2.874850727s to restartPrimaryControlPlane
	W0719 15:51:50.248802   58817 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:50.248827   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:50.712030   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:51:50.727328   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:51:50.737545   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:51:50.748830   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:51:50.748855   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:51:50.748900   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:51:50.758501   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:51:50.758548   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:51:50.767877   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:51:50.777413   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:51:50.777477   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:51:50.787005   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.795917   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:51:50.795971   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.805058   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:51:50.814014   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:51:50.814069   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
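Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the control-plane endpoint and deletes any file that does not contain it; in this run every grep exits with status 2 simply because the preceding kubeadm reset already removed the files. A standalone sketch of the same check-and-remove pass (the paths and endpoint string are exactly the ones in the log):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        # Missing or pointing at the wrong endpoint: drop it so kubeadm init regenerates it.
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done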
	I0719 15:51:50.823876   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:51:50.893204   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:51:50.893281   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:51:51.028479   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:51:51.028607   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:51:51.028698   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:51:51.212205   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:51:51.214199   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:51:51.214313   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:51:51.214423   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:51:51.214546   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:51:51.214625   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:51:51.214728   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:51:51.214813   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:51:51.214918   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:51:51.215011   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:51:51.215121   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:51:51.215231   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:51:51.215296   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:51:51.215381   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:51:51.275010   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:51:51.481366   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:51:51.685208   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:51:51.799007   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:51:51.820431   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:51:51.822171   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:51:51.822257   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:51:51.984066   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:51:51.986034   58817 out.go:204]   - Booting up control plane ...
	I0719 15:51:51.986137   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:51:51.988167   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:51:51.989122   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:51:51.989976   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:51:52.000879   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:51:49.811847   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:51.812747   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.312028   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.514497   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:53.012564   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.585244   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:52.587963   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.576923   58417 pod_ready.go:81] duration metric: took 4m0.000887015s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	E0719 15:51:54.576954   58417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 15:51:54.576979   58417 pod_ready.go:38] duration metric: took 4m10.045017696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:51:54.577013   58417 kubeadm.go:597] duration metric: took 4m18.572474217s to restartPrimaryControlPlane
	W0719 15:51:54.577075   58417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:54.577107   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:56.314112   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:58.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:55.012915   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:57.512491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:01.312620   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:03.812880   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:59.512666   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:02.013784   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.314545   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:08.811891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:04.512583   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:09.016808   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:10.813197   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:13.313167   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:11.513329   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:14.012352   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:15.812105   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:17.812843   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:16.014362   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:18.513873   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:20.685347   58417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.108209289s)
	I0719 15:52:20.685431   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:20.699962   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:20.709728   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:20.719022   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:20.719038   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:52:20.719074   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:52:20.727669   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:52:20.727731   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:52:20.736851   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:52:20.745821   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:52:20.745867   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:52:20.755440   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.764307   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:52:20.764360   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.773759   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:52:20.782354   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:52:20.782420   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:52:20.791186   58417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:20.837700   58417 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 15:52:20.837797   58417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:52:20.958336   58417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:20.958486   58417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:20.958629   58417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:20.967904   58417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:20.969995   58417 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:20.970097   58417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:52:20.970197   58417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:20.970325   58417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:52:20.970438   58417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:52:20.970550   58417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:52:20.970633   58417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:52:20.970740   58417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:52:20.970840   58417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:52:20.970949   58417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:52:20.971049   58417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:52:20.971106   58417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:52:20.971184   58417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:21.175226   58417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:21.355994   58417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 15:52:21.453237   58417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:21.569014   58417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:21.672565   58417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:21.673036   58417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:21.675860   58417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:20.312428   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:22.312770   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:24.314183   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.013099   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:23.512341   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.677594   58417 out.go:204]   - Booting up control plane ...
	I0719 15:52:21.677694   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:21.677787   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:21.677894   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:21.695474   58417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:21.701352   58417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:21.701419   58417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:52:21.831941   58417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 15:52:21.832046   58417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 15:52:22.333073   58417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.399393ms
	I0719 15:52:22.333184   58417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 15:52:27.336964   58417 kubeadm.go:310] [api-check] The API server is healthy after 5.002306078s
	I0719 15:52:27.348152   58417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:27.366916   58417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:27.396214   58417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:27.396475   58417 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-382231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:27.408607   58417 kubeadm.go:310] [bootstrap-token] Using token: xdoy2n.29347ekmgral9ki3
	I0719 15:52:27.409857   58417 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:27.409991   58417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:27.415553   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:27.424772   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:27.428421   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:27.439922   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:27.443985   58417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:27.742805   58417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:28.253742   58417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 15:52:28.744380   58417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 15:52:28.744405   58417 kubeadm.go:310] 
	I0719 15:52:28.744486   58417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:28.744498   58417 kubeadm.go:310] 
	I0719 15:52:28.744581   58417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:28.744588   58417 kubeadm.go:310] 
	I0719 15:52:28.744633   58417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 15:52:28.744704   58417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:28.744783   58417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:28.744794   58417 kubeadm.go:310] 
	I0719 15:52:28.744877   58417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 15:52:28.744891   58417 kubeadm.go:310] 
	I0719 15:52:28.744944   58417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:28.744951   58417 kubeadm.go:310] 
	I0719 15:52:28.744992   58417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 15:52:28.745082   58417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:28.745172   58417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:28.745181   58417 kubeadm.go:310] 
	I0719 15:52:28.745253   58417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:28.745319   58417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 15:52:28.745332   58417 kubeadm.go:310] 
	I0719 15:52:28.745412   58417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745499   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 15:52:28.745518   58417 kubeadm.go:310] 	--control-plane 
	I0719 15:52:28.745525   58417 kubeadm.go:310] 
	I0719 15:52:28.745599   58417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:28.745609   58417 kubeadm.go:310] 
	I0719 15:52:28.745677   58417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745778   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 15:52:28.747435   58417 kubeadm.go:310] W0719 15:52:20.814208    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747697   58417 kubeadm.go:310] W0719 15:52:20.814905    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747795   58417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:52:28.747815   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:52:28.747827   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:52:28.749619   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:26.813409   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.814040   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:25.513048   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:27.514730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.750992   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:28.762976   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
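The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referred to in the "Configuring bridge CNI" line above. Its exact contents are not reproduced in the log; the snippet below is a generic bridge-plus-portmap conflist written the same way, purely illustrative (all field values, including the pod subnet, are assumptions, not minikube's literal file):

    # Illustrative only: a typical bridge CNI conflist of roughly this size.
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF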
	I0719 15:52:28.783894   58417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:28.783972   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.783989   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-382231 minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=no-preload-382231 minikube.k8s.io/primary=true
	I0719 15:52:28.808368   58417 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:29.005658   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.505702   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.005765   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.505834   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.005837   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.506329   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.006419   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.505701   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.005735   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.130121   58417 kubeadm.go:1113] duration metric: took 4.346215264s to wait for elevateKubeSystemPrivileges
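The half-second polling of "kubectl get sa default" above is minikube waiting until the "default" service account exists, i.e. until the controller manager's service-account controller has caught up, before it considers the privilege elevation (the minikube-rbac binding issued a few lines earlier) complete. A standalone sketch of an equivalent wait loop, using only the kubectl binary and kubeconfig paths that appear in the log:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl
    KUBECONFIG=/var/lib/minikube/kubeconfig
    # Poll until the default service account exists; the log shows ~500 ms between attempts.
    until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG" >/dev/null 2>&1; do
      sleep 0.5
    done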
	I0719 15:52:33.130162   58417 kubeadm.go:394] duration metric: took 4m57.173876302s to StartCluster
	I0719 15:52:33.130187   58417 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.130290   58417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:52:33.131944   58417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.132178   58417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:52:33.132237   58417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:52:33.132339   58417 addons.go:69] Setting storage-provisioner=true in profile "no-preload-382231"
	I0719 15:52:33.132358   58417 addons.go:69] Setting default-storageclass=true in profile "no-preload-382231"
	I0719 15:52:33.132381   58417 addons.go:234] Setting addon storage-provisioner=true in "no-preload-382231"
	I0719 15:52:33.132385   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0719 15:52:33.132391   58417 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:52:33.132392   58417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-382231"
	I0719 15:52:33.132419   58417 addons.go:69] Setting metrics-server=true in profile "no-preload-382231"
	I0719 15:52:33.132423   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132444   58417 addons.go:234] Setting addon metrics-server=true in "no-preload-382231"
	W0719 15:52:33.132452   58417 addons.go:243] addon metrics-server should already be in state true
	I0719 15:52:33.132474   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132740   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132763   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132799   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132810   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132822   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132829   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.134856   58417 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:33.136220   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:33.149028   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0719 15:52:33.149128   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0719 15:52:33.149538   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.149646   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.150093   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150108   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150111   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150119   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150477   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150603   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150955   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.150971   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.151326   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I0719 15:52:33.151359   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.151715   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.152199   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.152223   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.152574   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.153136   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.153170   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.155187   58417 addons.go:234] Setting addon default-storageclass=true in "no-preload-382231"
	W0719 15:52:33.155207   58417 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:52:33.155235   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.155572   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.155602   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.170886   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0719 15:52:33.170884   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0719 15:52:33.171439   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.171510   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0719 15:52:33.171543   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172005   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172026   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172109   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172141   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172162   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172538   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172552   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172609   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172775   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.172831   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172875   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.173021   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.173381   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.173405   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.175118   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.175500   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.177023   58417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:52:33.177041   58417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:52:32.000607   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:52:32.000846   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:32.001125   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
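Meanwhile the v1.20.0 cluster (process 58817) is stuck: kubeadm's kubelet health probe at localhost:10248 is refused. The probe URL is printed verbatim above, so the same check and the usual follow-ups can be repeated by hand inside that guest (commands only; which of them exposes the fault depends on the run):

    curl -sSL http://localhost:10248/healthz ; echo     # kubeadm's own probe
    sudo systemctl status kubelet --no-pager            # is the unit running at all?
    sudo journalctl -u kubelet -n 100 --no-pager        # recent kubelet errors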
	I0719 15:52:33.178348   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:33.178362   58417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:33.178377   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.178450   58417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.178469   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:52:33.178486   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.182287   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182598   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.182617   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182741   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.182948   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.183074   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.183204   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.183372   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183940   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.183959   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183994   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.184237   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.184356   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.184505   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.191628   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0719 15:52:33.191984   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.192366   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.192385   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.192707   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.192866   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.194285   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.194485   58417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.194499   58417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:33.194514   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.197526   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.197853   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.197872   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.198087   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.198335   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.198472   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.198604   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.382687   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:52:33.403225   58417 node_ready.go:35] waiting up to 6m0s for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430507   58417 node_ready.go:49] node "no-preload-382231" has status "Ready":"True"
	I0719 15:52:33.430535   58417 node_ready.go:38] duration metric: took 27.282654ms for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430546   58417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:33.482352   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.555210   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.565855   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:33.565874   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:52:33.571653   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.609541   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:33.609569   58417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:33.674428   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:33.674455   58417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:33.746703   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:34.092029   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092051   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092341   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092359   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.092369   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092379   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092604   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092628   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.092634   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.093766   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.093785   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094025   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094043   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094076   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.094088   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094325   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094343   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094349   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128393   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.128412   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.128715   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128766   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.128775   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.319737   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.319764   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320141   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320161   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320165   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.320184   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.320199   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320441   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320462   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320475   58417 addons.go:475] Verifying addon metrics-server=true in "no-preload-382231"
	I0719 15:52:34.320482   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.322137   58417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:52:30.812091   59208 pod_ready.go:81] duration metric: took 4m0.006187238s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:30.812113   59208 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:30.812120   59208 pod_ready.go:38] duration metric: took 4m8.614544303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.812135   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:30.812161   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:30.812208   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:30.861054   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:30.861074   59208 cri.go:89] found id: ""
	I0719 15:52:30.861083   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:30.861144   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.865653   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:30.865708   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:30.900435   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:30.900459   59208 cri.go:89] found id: ""
	I0719 15:52:30.900468   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:30.900512   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.904686   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:30.904747   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:30.950618   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.950638   59208 cri.go:89] found id: ""
	I0719 15:52:30.950646   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:30.950691   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.955080   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:30.955147   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:30.996665   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:30.996691   59208 cri.go:89] found id: ""
	I0719 15:52:30.996704   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:30.996778   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.001122   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:31.001191   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:31.042946   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.042969   59208 cri.go:89] found id: ""
	I0719 15:52:31.042979   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:31.043039   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.047311   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:31.047365   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:31.086140   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.086166   59208 cri.go:89] found id: ""
	I0719 15:52:31.086175   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:31.086230   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.091742   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:31.091818   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:31.134209   59208 cri.go:89] found id: ""
	I0719 15:52:31.134241   59208 logs.go:276] 0 containers: []
	W0719 15:52:31.134252   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:31.134260   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:31.134316   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:31.173297   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.173325   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.173331   59208 cri.go:89] found id: ""
	I0719 15:52:31.173353   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:31.173414   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.177951   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.182099   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:31.182121   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:31.196541   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:31.196565   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:31.322528   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:31.322555   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:31.369628   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:31.369658   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:31.417834   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:31.417867   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:31.459116   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:31.459145   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.500986   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:31.501018   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.578557   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:31.578606   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:31.635053   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:31.635082   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:31.692604   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:31.692635   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.729765   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:31.729801   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.766152   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:31.766177   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:32.301240   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:32.301278   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.013083   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:32.013142   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:34.323358   58417 addons.go:510] duration metric: took 1.19112329s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:52:37.001693   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:37.001896   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:34.849019   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:34.866751   59208 api_server.go:72] duration metric: took 4m20.402312557s to wait for apiserver process to appear ...
	I0719 15:52:34.866779   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:34.866816   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:34.866876   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:34.905505   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.905532   59208 cri.go:89] found id: ""
	I0719 15:52:34.905542   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:34.905609   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.910996   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:34.911069   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:34.958076   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:34.958100   59208 cri.go:89] found id: ""
	I0719 15:52:34.958110   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:34.958166   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.962439   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:34.962507   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:34.999095   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:34.999117   59208 cri.go:89] found id: ""
	I0719 15:52:34.999126   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:34.999178   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.003785   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:35.003848   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:35.042585   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.042613   59208 cri.go:89] found id: ""
	I0719 15:52:35.042622   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:35.042683   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.048705   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:35.048770   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:35.092408   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.092435   59208 cri.go:89] found id: ""
	I0719 15:52:35.092444   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:35.092499   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.096983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:35.097050   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:35.135694   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.135717   59208 cri.go:89] found id: ""
	I0719 15:52:35.135726   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:35.135782   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.140145   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:35.140223   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:35.178912   59208 cri.go:89] found id: ""
	I0719 15:52:35.178938   59208 logs.go:276] 0 containers: []
	W0719 15:52:35.178948   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:35.178955   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:35.179015   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:35.229067   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.229090   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.229104   59208 cri.go:89] found id: ""
	I0719 15:52:35.229112   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:35.229172   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.234985   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.240098   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:35.240120   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:35.299418   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:35.299449   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:35.316294   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:35.316330   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:35.433573   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:35.433610   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:35.479149   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:35.479181   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:35.526270   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:35.526299   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.564209   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:35.564241   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.601985   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:35.602020   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.669986   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:35.670015   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.711544   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:35.711580   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:35.763800   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:35.763831   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:35.822699   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:35.822732   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.863377   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:35.863422   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:38.777749   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:52:38.781984   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:52:38.782935   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:38.782955   59208 api_server.go:131] duration metric: took 3.916169938s to wait for apiserver health ...
	I0719 15:52:38.782963   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:38.782983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:38.783026   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:38.818364   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:38.818387   59208 cri.go:89] found id: ""
	I0719 15:52:38.818395   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:38.818442   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.823001   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:38.823054   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:38.857871   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:38.857900   59208 cri.go:89] found id: ""
	I0719 15:52:38.857909   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:38.857958   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.864314   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:38.864375   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:38.910404   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:38.910434   59208 cri.go:89] found id: ""
	I0719 15:52:38.910445   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:38.910505   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.915588   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:38.915645   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:38.952981   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:38.953002   59208 cri.go:89] found id: ""
	I0719 15:52:38.953009   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:38.953055   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.957397   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:38.957447   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:39.002973   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.003001   59208 cri.go:89] found id: ""
	I0719 15:52:39.003011   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:39.003059   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.007496   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:39.007568   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:39.045257   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.045282   59208 cri.go:89] found id: ""
	I0719 15:52:39.045291   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:39.045351   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.049358   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:39.049415   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:39.083263   59208 cri.go:89] found id: ""
	I0719 15:52:39.083303   59208 logs.go:276] 0 containers: []
	W0719 15:52:39.083314   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:39.083321   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:39.083391   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:39.121305   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.121348   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.121354   59208 cri.go:89] found id: ""
	I0719 15:52:39.121363   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:39.121421   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.126259   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.130395   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:39.130413   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:39.171213   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:39.171239   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.206545   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:39.206577   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:39.267068   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:39.267105   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:39.373510   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:39.373544   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.512374   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.012559   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:39.013766   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:35.495479   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.989424   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:38.489746   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.489775   58417 pod_ready.go:81] duration metric: took 5.007393051s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.489790   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495855   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.495884   58417 pod_ready.go:81] duration metric: took 6.085398ms for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495895   58417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:40.502651   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:41.503286   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.503309   58417 pod_ready.go:81] duration metric: took 3.007406201s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.503321   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513225   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.513245   58417 pod_ready.go:81] duration metric: took 9.916405ms for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513256   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517651   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.517668   58417 pod_ready.go:81] duration metric: took 4.40518ms for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517677   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522529   58417 pod_ready.go:92] pod "kube-proxy-qd84x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.522544   58417 pod_ready.go:81] duration metric: took 4.861257ms for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522551   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687964   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.687987   58417 pod_ready.go:81] duration metric: took 165.428951ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687997   58417 pod_ready.go:38] duration metric: took 8.257437931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:41.688016   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:41.688069   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:41.705213   58417 api_server.go:72] duration metric: took 8.573000368s to wait for apiserver process to appear ...
	I0719 15:52:41.705236   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:41.705256   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:52:41.709425   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:52:41.710427   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:52:41.710447   58417 api_server.go:131] duration metric: took 5.203308ms to wait for apiserver health ...
	I0719 15:52:41.710455   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:41.890063   58417 system_pods.go:59] 9 kube-system pods found
	I0719 15:52:41.890091   58417 system_pods.go:61] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:41.890095   58417 system_pods.go:61] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:41.890099   58417 system_pods.go:61] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:41.890103   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:41.890106   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:41.890109   58417 system_pods.go:61] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:41.890112   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:41.890117   58417 system_pods.go:61] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:41.890121   58417 system_pods.go:61] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:41.890128   58417 system_pods.go:74] duration metric: took 179.666477ms to wait for pod list to return data ...
	I0719 15:52:41.890135   58417 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.086946   58417 default_sa.go:45] found service account: "default"
	I0719 15:52:42.086973   58417 default_sa.go:55] duration metric: took 196.832888ms for default service account to be created ...
	I0719 15:52:42.086984   58417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.289457   58417 system_pods.go:86] 9 kube-system pods found
	I0719 15:52:42.289483   58417 system_pods.go:89] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:42.289489   58417 system_pods.go:89] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:42.289493   58417 system_pods.go:89] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:42.289498   58417 system_pods.go:89] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:42.289502   58417 system_pods.go:89] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:42.289506   58417 system_pods.go:89] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:42.289510   58417 system_pods.go:89] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:42.289518   58417 system_pods.go:89] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.289523   58417 system_pods.go:89] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:42.289530   58417 system_pods.go:126] duration metric: took 202.54151ms to wait for k8s-apps to be running ...
	I0719 15:52:42.289536   58417 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.289575   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.304866   58417 system_svc.go:56] duration metric: took 15.319153ms WaitForService to wait for kubelet
	I0719 15:52:42.304931   58417 kubeadm.go:582] duration metric: took 9.172718104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.304958   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.488087   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.488108   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.488122   58417 node_conditions.go:105] duration metric: took 183.159221ms to run NodePressure ...
	I0719 15:52:42.488135   58417 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.488144   58417 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.488157   58417 start.go:255] writing updated cluster config ...
	I0719 15:52:42.488453   58417 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.536465   58417 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 15:52:42.538606   58417 out.go:177] * Done! kubectl is now configured to use "no-preload-382231" cluster and "default" namespace by default
	I0719 15:52:39.422000   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:39.422034   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:39.473826   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:39.473860   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:39.515998   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:39.516023   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:39.559475   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:39.559506   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:39.574174   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:39.574205   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.615906   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:39.615933   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.676764   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:39.676795   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.714437   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:39.714467   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:42.584088   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:42.584114   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.584119   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.584123   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.584127   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.584130   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.584133   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.584138   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.584143   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.584150   59208 system_pods.go:74] duration metric: took 3.801182741s to wait for pod list to return data ...
	I0719 15:52:42.584156   59208 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.586910   59208 default_sa.go:45] found service account: "default"
	I0719 15:52:42.586934   59208 default_sa.go:55] duration metric: took 2.771722ms for default service account to be created ...
	I0719 15:52:42.586943   59208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.593611   59208 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:42.593634   59208 system_pods.go:89] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.593639   59208 system_pods.go:89] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.593645   59208 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.593650   59208 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.593654   59208 system_pods.go:89] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.593658   59208 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.593669   59208 system_pods.go:89] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.593673   59208 system_pods.go:89] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.593680   59208 system_pods.go:126] duration metric: took 6.731347ms to wait for k8s-apps to be running ...
	I0719 15:52:42.593687   59208 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.593726   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.615811   59208 system_svc.go:56] duration metric: took 22.114487ms WaitForService to wait for kubelet
	I0719 15:52:42.615841   59208 kubeadm.go:582] duration metric: took 4m28.151407807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.615864   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.619021   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.619040   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.619050   59208 node_conditions.go:105] duration metric: took 3.180958ms to run NodePressure ...
	I0719 15:52:42.619060   59208 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.619067   59208 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.619079   59208 start.go:255] writing updated cluster config ...
	I0719 15:52:42.619329   59208 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.677117   59208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:42.679317   59208 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-601445" cluster and "default" namespace by default
	I0719 15:52:41.514013   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:44.012173   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:47.002231   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:47.002432   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:46.013717   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:48.013121   58376 pod_ready.go:81] duration metric: took 4m0.006772624s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:48.013143   58376 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:48.013150   58376 pod_ready.go:38] duration metric: took 4m4.417474484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:48.013165   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:48.013194   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:48.013234   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:48.067138   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.067166   58376 cri.go:89] found id: ""
	I0719 15:52:48.067175   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:48.067218   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.071486   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:48.071531   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:48.115491   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.115514   58376 cri.go:89] found id: ""
	I0719 15:52:48.115525   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:48.115583   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.119693   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:48.119750   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:48.161158   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.161185   58376 cri.go:89] found id: ""
	I0719 15:52:48.161194   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:48.161257   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.165533   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:48.165584   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:48.207507   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.207528   58376 cri.go:89] found id: ""
	I0719 15:52:48.207537   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:48.207596   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.212070   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:48.212145   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:48.250413   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.250441   58376 cri.go:89] found id: ""
	I0719 15:52:48.250451   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:48.250510   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.255025   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:48.255095   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:48.289898   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.289922   58376 cri.go:89] found id: ""
	I0719 15:52:48.289930   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:48.289976   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.294440   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:48.294489   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:48.329287   58376 cri.go:89] found id: ""
	I0719 15:52:48.329314   58376 logs.go:276] 0 containers: []
	W0719 15:52:48.329326   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:48.329332   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:48.329394   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:48.373215   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.373242   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.373248   58376 cri.go:89] found id: ""
	I0719 15:52:48.373257   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:48.373311   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.377591   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.381610   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:48.381635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:48.440106   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:48.440148   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:48.455200   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:48.455234   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.496729   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:48.496757   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.535475   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:48.535501   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.592954   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:48.592993   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.635925   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:48.635957   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.671611   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:48.671642   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:48.809648   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:48.809681   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.863327   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:48.863361   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.902200   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:48.902245   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.937497   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:48.937525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:49.446900   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:49.446933   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:51.988535   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:52.005140   58376 api_server.go:72] duration metric: took 4m16.116469116s to wait for apiserver process to appear ...
	I0719 15:52:52.005165   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:52.005206   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:52.005258   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:52.041113   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.041143   58376 cri.go:89] found id: ""
	I0719 15:52:52.041150   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:52.041199   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.045292   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:52.045349   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:52.086747   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.086770   58376 cri.go:89] found id: ""
	I0719 15:52:52.086778   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:52.086821   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.091957   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:52.092015   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:52.128096   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.128128   58376 cri.go:89] found id: ""
	I0719 15:52:52.128138   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:52.128204   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.132889   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:52.132949   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:52.168359   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.168389   58376 cri.go:89] found id: ""
	I0719 15:52:52.168398   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:52.168454   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.172577   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:52.172639   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:52.211667   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.211684   58376 cri.go:89] found id: ""
	I0719 15:52:52.211691   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:52.211740   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.215827   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:52.215893   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:52.252105   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.252130   58376 cri.go:89] found id: ""
	I0719 15:52:52.252140   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:52.252194   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.256407   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:52.256464   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:52.292646   58376 cri.go:89] found id: ""
	I0719 15:52:52.292675   58376 logs.go:276] 0 containers: []
	W0719 15:52:52.292685   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:52.292693   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:52.292755   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:52.326845   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.326875   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.326880   58376 cri.go:89] found id: ""
	I0719 15:52:52.326889   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:52.326946   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.331338   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.335530   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:52.335554   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.371981   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:52.372010   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.406921   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:52.406946   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.442975   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:52.443007   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:52.497838   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:52.497873   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:52.556739   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:52.556776   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:52.665610   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:52.665643   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.711547   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:52.711580   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.759589   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:52.759634   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.807300   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:52.807374   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.857159   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:52.857186   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.917896   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:52.917931   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:53.342603   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:53.342646   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:55.857727   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:52:55.861835   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:52:55.862804   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:55.862822   58376 api_server.go:131] duration metric: took 3.857650801s to wait for apiserver health ...
	I0719 15:52:55.862829   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:55.862852   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:55.862905   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:55.900840   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:55.900859   58376 cri.go:89] found id: ""
	I0719 15:52:55.900866   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:55.900909   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.906205   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:55.906291   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:55.950855   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:55.950879   58376 cri.go:89] found id: ""
	I0719 15:52:55.950887   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:55.950939   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.955407   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:55.955472   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:55.994954   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:55.994981   58376 cri.go:89] found id: ""
	I0719 15:52:55.994992   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:55.995052   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.999179   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:55.999241   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:56.036497   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.036521   58376 cri.go:89] found id: ""
	I0719 15:52:56.036530   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:56.036585   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.041834   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:56.041900   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:56.082911   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.082934   58376 cri.go:89] found id: ""
	I0719 15:52:56.082943   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:56.082998   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.087505   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:56.087571   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:56.124517   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.124544   58376 cri.go:89] found id: ""
	I0719 15:52:56.124554   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:56.124616   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.129221   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:56.129297   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:56.170151   58376 cri.go:89] found id: ""
	I0719 15:52:56.170177   58376 logs.go:276] 0 containers: []
	W0719 15:52:56.170193   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:56.170212   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:56.170292   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:56.218351   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:56.218377   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.218381   58376 cri.go:89] found id: ""
	I0719 15:52:56.218388   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:56.218437   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.223426   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.227742   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:56.227759   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.271701   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:56.271733   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:56.325333   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:56.325366   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:56.431391   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:56.431423   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:56.485442   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:56.485472   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:56.527493   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:56.527525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.563260   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:56.563289   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.600604   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:56.600635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.656262   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:56.656305   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:57.031511   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:57.031549   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:57.046723   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:57.046748   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:57.083358   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:57.083390   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:57.124108   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:57.124136   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:59.670804   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:59.670831   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.670836   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.670840   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.670844   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.670847   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.670850   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.670855   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.670859   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.670865   58376 system_pods.go:74] duration metric: took 3.808031391s to wait for pod list to return data ...
	I0719 15:52:59.670871   58376 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:59.673231   58376 default_sa.go:45] found service account: "default"
	I0719 15:52:59.673249   58376 default_sa.go:55] duration metric: took 2.372657ms for default service account to be created ...
	I0719 15:52:59.673255   58376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:59.678267   58376 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:59.678289   58376 system_pods.go:89] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.678296   58376 system_pods.go:89] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.678303   58376 system_pods.go:89] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.678310   58376 system_pods.go:89] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.678315   58376 system_pods.go:89] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.678322   58376 system_pods.go:89] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.678331   58376 system_pods.go:89] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.678341   58376 system_pods.go:89] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.678352   58376 system_pods.go:126] duration metric: took 5.090968ms to wait for k8s-apps to be running ...
	I0719 15:52:59.678362   58376 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:59.678411   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:59.695116   58376 system_svc.go:56] duration metric: took 16.750228ms WaitForService to wait for kubelet
	I0719 15:52:59.695139   58376 kubeadm.go:582] duration metric: took 4m23.806469478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:59.695163   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:59.697573   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:59.697592   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:59.697602   58376 node_conditions.go:105] duration metric: took 2.433643ms to run NodePressure ...
	I0719 15:52:59.697612   58376 start.go:241] waiting for startup goroutines ...
	I0719 15:52:59.697618   58376 start.go:246] waiting for cluster config update ...
	I0719 15:52:59.697629   58376 start.go:255] writing updated cluster config ...
	I0719 15:52:59.697907   58376 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:59.744965   58376 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:59.746888   58376 out.go:177] * Done! kubectl is now configured to use "embed-certs-817144" cluster and "default" namespace by default
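The run above finishes by polling https://192.168.72.37:8443/healthz, listing the kube-system pods, and confirming the kubelet service is active before declaring the "embed-certs-817144" cluster ready. A minimal sketch (not part of the log) of reproducing those same checks by hand, assuming the "embed-certs-817144" context named on the line above; the healthz and systemctl probes mirror the ones the log runs:

    # Hit the apiserver health endpoint the log polls (it returned 200 "ok" above):
    kubectl --context embed-certs-817144 get --raw /healthz

    # List the kube-system pods the log waits on (8 were found above):
    kubectl --context embed-certs-817144 -n kube-system get pods

    # Same kubelet liveness check the log runs over SSH; run this on the node itself:
    sudo systemctl is-active --quiet kubelet && echo "kubelet is active"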
	I0719 15:53:07.003006   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:07.003249   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004552   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:47.004805   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004816   58817 kubeadm.go:310] 
	I0719 15:53:47.004902   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:53:47.004996   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:53:47.005020   58817 kubeadm.go:310] 
	I0719 15:53:47.005068   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:53:47.005117   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:53:47.005246   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:53:47.005262   58817 kubeadm.go:310] 
	I0719 15:53:47.005397   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:53:47.005458   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:53:47.005508   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:53:47.005522   58817 kubeadm.go:310] 
	I0719 15:53:47.005643   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:53:47.005714   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:53:47.005720   58817 kubeadm.go:310] 
	I0719 15:53:47.005828   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:53:47.005924   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:53:47.005987   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:53:47.006080   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:53:47.006092   58817 kubeadm.go:310] 
	I0719 15:53:47.006824   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:53:47.006941   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:53:47.007028   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 15:53:47.007180   58817 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
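A condensed sketch of the kubelet triage that this kubeadm failure text recommends; the commands are the ones quoted verbatim in the output above, with CONTAINERID left as the placeholder kubeadm itself uses:

    # Check whether the kubelet is running and why it may have exited:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet

    # List control-plane containers under CRI-O and inspect a failing one:
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID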
	
	I0719 15:53:47.007244   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:53:47.468272   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:53:47.483560   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:53:47.494671   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:53:47.494691   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:53:47.494742   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:53:47.503568   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:53:47.503630   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:53:47.512606   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:53:47.521247   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:53:47.521303   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:53:47.530361   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.539748   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:53:47.539799   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.549243   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:53:47.559306   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:53:47.559369   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
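The block above is minikube's stale-config check before it retries kubeadm init: for each kubeconfig it greps for the expected control-plane endpoint and removes the file when the grep fails (here every file is already absent, so each check exits with status 2 and the rm is a no-op). A compact sketch of that loop, assuming the same four files and endpoint shown in the log:

    for f in admin kubelet controller-manager scheduler; do
      # Keep the file only if it already points at the expected control-plane endpoint:
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done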
	I0719 15:53:47.570095   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:53:47.648871   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:53:47.649078   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:53:47.792982   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:53:47.793141   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:53:47.793254   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:53:47.992636   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:53:47.994547   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:53:47.994648   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:53:47.994734   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:53:47.994866   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:53:47.994963   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:53:47.995077   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:53:47.995148   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:53:47.995250   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:53:47.995336   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:53:47.995447   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:53:47.995549   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:53:47.995603   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:53:47.995685   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:53:48.092671   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:53:48.256432   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:53:48.334799   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:53:48.483435   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:53:48.504681   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:53:48.505503   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:53:48.505553   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:53:48.654795   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:53:48.656738   58817 out.go:204]   - Booting up control plane ...
	I0719 15:53:48.656849   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:53:48.664278   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:53:48.665556   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:53:48.666292   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:53:48.668355   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:54:28.670119   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:54:28.670451   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:28.670679   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:33.671159   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:33.671408   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:43.671899   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:43.672129   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:03.673219   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:03.673444   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674003   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:43.674282   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674311   58817 kubeadm.go:310] 
	I0719 15:55:43.674362   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:55:43.674430   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:55:43.674439   58817 kubeadm.go:310] 
	I0719 15:55:43.674479   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:55:43.674551   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:55:43.674694   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:55:43.674711   58817 kubeadm.go:310] 
	I0719 15:55:43.674872   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:55:43.674923   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:55:43.674973   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:55:43.674987   58817 kubeadm.go:310] 
	I0719 15:55:43.675076   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:55:43.675185   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:55:43.675204   58817 kubeadm.go:310] 
	I0719 15:55:43.675343   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:55:43.675486   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:55:43.675593   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:55:43.675698   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:55:43.675712   58817 kubeadm.go:310] 
	I0719 15:55:43.676679   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:55:43.676793   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:55:43.676881   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:55:43.676950   58817 kubeadm.go:394] duration metric: took 7m56.357000435s to StartCluster
	I0719 15:55:43.677009   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:55:43.677063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:55:43.720714   58817 cri.go:89] found id: ""
	I0719 15:55:43.720746   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.720757   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:55:43.720765   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:55:43.720832   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:55:43.758961   58817 cri.go:89] found id: ""
	I0719 15:55:43.758987   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.758995   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:55:43.759001   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:55:43.759048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:55:43.798844   58817 cri.go:89] found id: ""
	I0719 15:55:43.798872   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.798882   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:55:43.798889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:55:43.798960   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:55:43.835395   58817 cri.go:89] found id: ""
	I0719 15:55:43.835418   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.835426   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:55:43.835432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:55:43.835499   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:55:43.871773   58817 cri.go:89] found id: ""
	I0719 15:55:43.871800   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.871810   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:55:43.871817   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:55:43.871881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:55:43.903531   58817 cri.go:89] found id: ""
	I0719 15:55:43.903552   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.903559   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:55:43.903565   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:55:43.903613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:55:43.943261   58817 cri.go:89] found id: ""
	I0719 15:55:43.943288   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.943299   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:55:43.943306   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:55:43.943364   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:55:43.980788   58817 cri.go:89] found id: ""
	I0719 15:55:43.980815   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.980826   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:55:43.980837   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:55:43.980853   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:55:44.033880   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:55:44.033922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:55:44.048683   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:55:44.048709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:55:44.129001   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:55:44.129028   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:55:44.129043   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:55:44.245246   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:55:44.245282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:55:44.303587   58817 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:55:44.303632   58817 out.go:239] * 
	W0719 15:55:44.303689   58817 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.303716   58817 out.go:239] * 
	W0719 15:55:44.304733   58817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:55:44.308714   58817 out.go:177] 
	W0719 15:55:44.310103   58817 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.310163   58817 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:55:44.310190   58817 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:55:44.311707   58817 out.go:177] 
	
	
	==> CRI-O <==
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.091480376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404905091457590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7049c954-0e5c-46cf-83b4-1cca64d6cb4b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.092025805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a734db1-ad24-468c-8d66-f85fa9df58d8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.092086274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a734db1-ad24-468c-8d66-f85fa9df58d8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.092279757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a734db1-ad24-468c-8d66-f85fa9df58d8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.139736631Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5623cfd1-0d00-48f6-a75f-99ef3579aee0 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.139828555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5623cfd1-0d00-48f6-a75f-99ef3579aee0 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.141052621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60a3cfa8-837c-465b-83f8-ae4e5f7ac1fe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.141597124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404905141566747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60a3cfa8-837c-465b-83f8-ae4e5f7ac1fe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.142411464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70272bfe-8a7d-49bd-a2f1-27dfa8899a4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.142477429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70272bfe-8a7d-49bd-a2f1-27dfa8899a4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.142702802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70272bfe-8a7d-49bd-a2f1-27dfa8899a4b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.181381840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb508835-dd8b-4362-8b2e-61ab30750391 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.181516463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb508835-dd8b-4362-8b2e-61ab30750391 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.182960339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16e1c99f-c87c-4435-9c65-822a7c7458fb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.183431764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404905183405985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16e1c99f-c87c-4435-9c65-822a7c7458fb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.184082266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ccae171-57f1-4bf3-95a9-8fe2c3d4d080 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.184182124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ccae171-57f1-4bf3-95a9-8fe2c3d4d080 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.184539196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ccae171-57f1-4bf3-95a9-8fe2c3d4d080 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.222566429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ac06d3e-ed14-4817-aa53-ce9130600bf8 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.222644286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ac06d3e-ed14-4817-aa53-ce9130600bf8 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.224101999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7f17d2b-5b29-4a93-877d-37cf2fa4846b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.224793540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404905224760283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7f17d2b-5b29-4a93-877d-37cf2fa4846b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.225589969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a48baefb-30c4-4b5a-b64a-f8da97e345be name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.225665517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a48baefb-30c4-4b5a-b64a-f8da97e345be name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:01:45 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:01:45.225998204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a48baefb-30c4-4b5a-b64a-f8da97e345be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85352e7e71d12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   1aa11435d4620       storage-provisioner
	3133206986d52       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   6cec0feb73359       busybox
	001c96d3b9669       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   07b01e0804302       coredns-7db6d8ff4d-z7865
	5a58e1c6658a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   1aa11435d4620       storage-provisioner
	6d295bc6e6fb8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   e574b4ae053d9       kube-proxy-r7b2z
	1f566fdead149       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   448550de9f91f       kube-scheduler-default-k8s-diff-port-601445
	c693018988910       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   ce332af1c8756       kube-controller-manager-default-k8s-diff-port-601445
	60e7b95877d59       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   510612ad4f1ca       etcd-default-k8s-diff-port-601445
	65610b0e92d14       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   038bb23c12bf5       kube-apiserver-default-k8s-diff-port-601445
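The '==> container status <==' table above is the node-local CRI-O view of the same containers listed in the debug responses earlier in this section. A minimal sketch for reproducing it against this profile, assuming the default-k8s-diff-port-601445 VM is still running and that crictl needs sudo inside it:

	# Open a shell on the minikube node for this profile and list all containers, including exited ones
	minikube ssh -p default-k8s-diff-port-601445 -- sudo crictl ps -a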
	
	
	==> coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51615 - 3885 "HINFO IN 6928262908906125533.6899998174746735126. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015049395s
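The '==> describe nodes <==' section that follows is the standard kubectl description of the control-plane node. A minimal sketch for regenerating it, assuming the kubeconfig context created by minikube carries the profile name (as it does for the other profiles in this report):

	# Describe the control-plane node via the profile's kubeconfig context (context name assumed)
	kubectl --context default-k8s-diff-port-601445 describe node default-k8s-diff-port-601445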
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-601445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-601445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=default-k8s-diff-port-601445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_41_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:41:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-601445
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 16:01:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:58:54 +0000   Fri, 19 Jul 2024 15:41:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:58:54 +0000   Fri, 19 Jul 2024 15:41:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:58:54 +0000   Fri, 19 Jul 2024 15:41:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:58:54 +0000   Fri, 19 Jul 2024 15:48:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.144
	  Hostname:    default-k8s-diff-port-601445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c28d45a5c4b438483c32c75a35bff56
	  System UUID:                0c28d45a-5c4b-4384-83c3-2c75a35bff56
	  Boot ID:                    4183ade3-b8bd-4f96-98e9-4b60579e710a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-z7865                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-601445                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-601445             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-601445    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-r7b2z                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-601445             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-h7hgv                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-601445 event: Registered Node default-k8s-diff-port-601445 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-601445 event: Registered Node default-k8s-diff-port-601445 in Controller
	
	
	==> dmesg <==
	[Jul19 15:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053199] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049149] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.916188] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.425066] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.593967] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.063637] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061172] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Jul19 15:48] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.147635] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.289072] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.498152] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.065072] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.075217] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +4.612178] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.415316] systemd-fstab-generator[1532]: Ignoring "noauto" option for root device
	[  +5.286644] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.795950] kauditd_printk_skb: 13 callbacks suppressed
	[ +15.405901] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] <==
	{"level":"warn","ts":"2024-07-19T15:48:28.616552Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"359.259034ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6500542316625958532 > lease_revoke:<id:5a3690cbade2952a>","response":"size:28"}
	{"level":"info","ts":"2024-07-19T15:48:28.616661Z","caller":"traceutil/trace.go:171","msg":"trace[540730726] linearizableReadLoop","detail":"{readStateIndex:627; appliedIndex:626; }","duration":"410.437291ms","start":"2024-07-19T15:48:28.206205Z","end":"2024-07-19T15:48:28.616643Z","steps":["trace[540730726] 'read index received'  (duration: 50.809303ms)","trace[540730726] 'applied index is now lower than readState.Index'  (duration: 359.626909ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T15:48:28.616826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.592887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-z7865\" ","response":"range_response_count:1 size:4832"}
	{"level":"info","ts":"2024-07-19T15:48:28.616845Z","caller":"traceutil/trace.go:171","msg":"trace[1998011317] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-z7865; range_end:; response_count:1; response_revision:584; }","duration":"410.658002ms","start":"2024-07-19T15:48:28.206181Z","end":"2024-07-19T15:48:28.616839Z","steps":["trace[1998011317] 'agreement among raft nodes before linearized reading'  (duration: 410.530547ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:48:28.616864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T15:48:28.206168Z","time spent":"410.691973ms","remote":"127.0.0.1:60388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4855,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-z7865\" "}
	{"level":"info","ts":"2024-07-19T15:48:28.782731Z","caller":"traceutil/trace.go:171","msg":"trace[1867506632] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:627; }","duration":"166.022948ms","start":"2024-07-19T15:48:28.616686Z","end":"2024-07-19T15:48:28.782709Z","steps":["trace[1867506632] 'read index received'  (duration: 165.821748ms)","trace[1867506632] 'applied index is now lower than readState.Index'  (duration: 200.454µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T15:48:28.782816Z","caller":"traceutil/trace.go:171","msg":"trace[479745] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"293.299938ms","start":"2024-07-19T15:48:28.48951Z","end":"2024-07-19T15:48:28.782809Z","steps":["trace[479745] 'process raft request'  (duration: 293.068034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:48:28.783144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.178494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T15:48:28.78321Z","caller":"traceutil/trace.go:171","msg":"trace[1444057916] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:585; }","duration":"460.268821ms","start":"2024-07-19T15:48:28.322931Z","end":"2024-07-19T15:48:28.7832Z","steps":["trace[1444057916] 'agreement among raft nodes before linearized reading'  (duration: 460.177209ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:48:28.783236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T15:48:28.322876Z","time spent":"460.35391ms","remote":"127.0.0.1:60206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-19T15:48:28.783145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.634571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-601445\" ","response":"range_response_count:1 size:5536"}
	{"level":"info","ts":"2024-07-19T15:48:28.783968Z","caller":"traceutil/trace.go:171","msg":"trace[417938923] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-601445; range_end:; response_count:1; response_revision:585; }","duration":"163.486255ms","start":"2024-07-19T15:48:28.62047Z","end":"2024-07-19T15:48:28.783956Z","steps":["trace[417938923] 'agreement among raft nodes before linearized reading'  (duration: 162.543099ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:48:29.397305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.080498ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6500542316625958537 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/busybox.17e3a7ea90e7efcc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/busybox.17e3a7ea90e7efcc\" value_size:663 lease:6500542316625958198 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T15:48:29.397474Z","caller":"traceutil/trace.go:171","msg":"trace[1316621286] linearizableReadLoop","detail":"{readStateIndex:629; appliedIndex:628; }","duration":"609.342087ms","start":"2024-07-19T15:48:28.788118Z","end":"2024-07-19T15:48:29.39746Z","steps":["trace[1316621286] 'read index received'  (duration: 337.998359ms)","trace[1316621286] 'applied index is now lower than readState.Index'  (duration: 271.342632ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T15:48:29.397605Z","caller":"traceutil/trace.go:171","msg":"trace[206734062] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"611.768889ms","start":"2024-07-19T15:48:28.785826Z","end":"2024-07-19T15:48:29.397594Z","steps":["trace[206734062] 'process raft request'  (duration: 340.708601ms)","trace[206734062] 'compare'  (duration: 269.406804ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T15:48:29.397686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T15:48:28.785806Z","time spent":"611.846569ms","remote":"127.0.0.1:60284","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":730,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/busybox.17e3a7ea90e7efcc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/busybox.17e3a7ea90e7efcc\" value_size:663 lease:6500542316625958198 >> failure:<>"}
	{"level":"warn","ts":"2024-07-19T15:48:29.397793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"496.348965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T15:48:29.397859Z","caller":"traceutil/trace.go:171","msg":"trace[1419440282] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:586; }","duration":"496.446317ms","start":"2024-07-19T15:48:28.9014Z","end":"2024-07-19T15:48:29.397846Z","steps":["trace[1419440282] 'agreement among raft nodes before linearized reading'  (duration: 496.208674ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:48:29.397914Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T15:48:28.901386Z","time spent":"496.517128ms","remote":"127.0.0.1:60216","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-19T15:48:29.398112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"609.983048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-z7865\" ","response":"range_response_count:1 size:4832"}
	{"level":"info","ts":"2024-07-19T15:48:29.398165Z","caller":"traceutil/trace.go:171","msg":"trace[462497538] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-z7865; range_end:; response_count:1; response_revision:586; }","duration":"610.055204ms","start":"2024-07-19T15:48:28.788099Z","end":"2024-07-19T15:48:29.398154Z","steps":["trace[462497538] 'agreement among raft nodes before linearized reading'  (duration: 609.922653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T15:48:29.398199Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T15:48:28.78809Z","time spent":"610.100782ms","remote":"127.0.0.1:60388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":4855,"request content":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-z7865\" "}
	{"level":"info","ts":"2024-07-19T15:58:10.136987Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":843}
	{"level":"info","ts":"2024-07-19T15:58:10.147725Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":843,"took":"9.946746ms","hash":631337725,"current-db-size-bytes":2727936,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2727936,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-07-19T15:58:10.147831Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":631337725,"revision":843,"compact-revision":-1}
	
	
	==> kernel <==
	 16:01:45 up 14 min,  0 users,  load average: 0.08, 0.10, 0.05
	Linux default-k8s-diff-port-601445 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] <==
	I0719 15:56:12.507435       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:58:11.510553       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:58:11.510668       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 15:58:12.511910       1 handler_proxy.go:93] no RequestInfo found in the context
	W0719 15:58:12.512002       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:58:12.512018       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 15:58:12.512091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0719 15:58:12.512096       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 15:58:12.513294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:59:12.513056       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:59:12.513314       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 15:59:12.513420       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:59:12.513492       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:59:12.513562       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 15:59:12.515275       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:01:12.513851       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:01:12.513934       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 16:01:12.513945       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:01:12.516361       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:01:12.516488       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:01:12.516516       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] <==
	I0719 15:55:54.586260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:56:23.984716       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:56:24.598587       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:56:53.992214       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:56:54.605484       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:57:23.997122       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:57:24.613059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:57:54.003289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:57:54.621723       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:58:24.009079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:58:24.630222       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:58:54.014653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:58:54.637309       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 15:59:23.825659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="428.057µs"
	E0719 15:59:24.020464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:59:24.645570       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 15:59:38.824592       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="86.065µs"
	E0719 15:59:54.026179       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:59:54.655644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:00:24.031779       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:00:24.663553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:00:54.036641       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:00:54.670935       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:01:24.041289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:01:24.678729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] <==
	I0719 15:48:12.395519       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:48:12.405662       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.144"]
	I0719 15:48:12.457549       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:48:12.457690       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:48:12.457731       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:48:12.462761       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:48:12.463074       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:48:12.463142       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:12.466208       1 config.go:192] "Starting service config controller"
	I0719 15:48:12.466750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:48:12.466847       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:48:12.466871       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:48:12.467388       1 config.go:319] "Starting node config controller"
	I0719 15:48:12.468132       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:48:12.567151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:48:12.567429       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:48:12.568294       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] <==
	I0719 15:48:09.514298       1 serving.go:380] Generated self-signed cert in-memory
	W0719 15:48:11.487694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 15:48:11.487790       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:48:11.487802       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 15:48:11.487808       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 15:48:11.539478       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:48:11.539520       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:11.545847       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:48:11.545927       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:48:11.546794       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:48:11.550543       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 15:48:11.646594       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 15:59:09 default-k8s-diff-port-601445 kubelet[940]: E0719 15:59:09.828479     940 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 15:59:09 default-k8s-diff-port-601445 kubelet[940]: E0719 15:59:09.828567     940 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 15:59:09 default-k8s-diff-port-601445 kubelet[940]: E0719 15:59:09.828907     940 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbh8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-h7hgv_kube-system(9b4cdf2e-e6fc-4d88-99f1-31066805f915): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 19 15:59:09 default-k8s-diff-port-601445 kubelet[940]: E0719 15:59:09.828969     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 15:59:23 default-k8s-diff-port-601445 kubelet[940]: E0719 15:59:23.808780     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 15:59:38 default-k8s-diff-port-601445 kubelet[940]: E0719 15:59:38.808759     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 15:59:51 default-k8s-diff-port-601445 kubelet[940]: E0719 15:59:51.808132     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:00:03 default-k8s-diff-port-601445 kubelet[940]: E0719 16:00:03.809460     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:00:07 default-k8s-diff-port-601445 kubelet[940]: E0719 16:00:07.833122     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:00:07 default-k8s-diff-port-601445 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:00:07 default-k8s-diff-port-601445 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:00:07 default-k8s-diff-port-601445 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:00:07 default-k8s-diff-port-601445 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:00:16 default-k8s-diff-port-601445 kubelet[940]: E0719 16:00:16.808248     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:00:31 default-k8s-diff-port-601445 kubelet[940]: E0719 16:00:31.808613     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:00:44 default-k8s-diff-port-601445 kubelet[940]: E0719 16:00:44.808608     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:00:56 default-k8s-diff-port-601445 kubelet[940]: E0719 16:00:56.808195     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:01:07 default-k8s-diff-port-601445 kubelet[940]: E0719 16:01:07.827586     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:01:07 default-k8s-diff-port-601445 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:01:07 default-k8s-diff-port-601445 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:01:07 default-k8s-diff-port-601445 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:01:07 default-k8s-diff-port-601445 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:01:10 default-k8s-diff-port-601445 kubelet[940]: E0719 16:01:10.809200     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:01:21 default-k8s-diff-port-601445 kubelet[940]: E0719 16:01:21.809181     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:01:32 default-k8s-diff-port-601445 kubelet[940]: E0719 16:01:32.807849     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	
	
	==> storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] <==
	I0719 15:48:12.360500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 15:48:42.363568       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] <==
	I0719 15:48:43.101018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 15:48:43.112535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 15:48:43.112627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 15:49:00.513710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 15:49:00.513880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-601445_9cc5af00-1d19-4faa-a45a-a37e0574e41a!
	I0719 15:49:00.515154       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d52cd4ec-cef5-457d-bdf9-faf7f2a7401c", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-601445_9cc5af00-1d19-4faa-a45a-a37e0574e41a became leader
	I0719 15:49:00.614362       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-601445_9cc5af00-1d19-4faa-a45a-a37e0574e41a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-h7hgv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 describe pod metrics-server-569cc877fc-h7hgv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601445 describe pod metrics-server-569cc877fc-h7hgv: exit status 1 (59.050489ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-h7hgv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-601445 describe pod metrics-server-569cc877fc-h7hgv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (545.58s)
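Note: the metrics-server ImagePullBackOff in the kubelet log above is expected for this test; the Audit table in the next log dump shows the addon being enabled with --registries=MetricsServer=fake.domain, so the pull is pointed at a registry that does not resolve. A minimal manual triage sketch, assuming the profile name from the log and the standard k8s-app=metrics-server label (not harness code):

	kubectl --context default-k8s-diff-port-601445 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-601445 -n kube-system describe pod -l k8s-app=metrics-server
	# the container image should show the injected fake registry
	kubectl --context default-k8s-diff-port-601445 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'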

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0719 15:54:28.744032   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817144 -n embed-certs-817144
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-19 16:02:00.270638965 +0000 UTC m=+6095.066225496
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
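For reference, a minimal manual equivalent of the wait that timed out above, using the namespace, label and timeout reported by the harness (a sketch only, not the test code):

	kubectl --context embed-certs-817144 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-817144 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m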
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-817144 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-817144 logs -n 25: (2.107394304s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-127438 -- sudo                         | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-127438                                 | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-574044                           | kubernetes-upgrade-574044    | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-817144            | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:44:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
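(Aside, not part of the captured log: the header line above documents klog's line layout. The following is a minimal Go sketch of how such a line can be split into its fields; the regular expression is an assumption derived only from the format string shown, not code from minikube or klog.)

package main

import (
	"fmt"
	"regexp"
)

// klogLine follows the documented layout:
//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	// Example taken from the first entry below.
	line := "I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s threadid=%s loc=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}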
	I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:44:39.385249   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385257   59208 out.go:304] Setting ErrFile to fd 2...
	I0719 15:44:39.385261   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385405   59208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:44:39.385919   59208 out.go:298] Setting JSON to false
	I0719 15:44:39.386767   59208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5225,"bootTime":1721398654,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:44:39.386817   59208 start.go:139] virtualization: kvm guest
	I0719 15:44:39.390104   59208 out.go:177] * [default-k8s-diff-port-601445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:44:39.391867   59208 notify.go:220] Checking for updates...
	I0719 15:44:39.391890   59208 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:44:39.393463   59208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:44:39.394883   59208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:44:39.396081   59208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:44:39.397280   59208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:44:39.398540   59208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:44:39.400177   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:44:39.400543   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.400601   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.415749   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0719 15:44:39.416104   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.416644   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.416664   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.416981   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.417206   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.417443   59208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:44:39.417751   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.417793   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.432550   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0719 15:44:39.433003   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.433478   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.433504   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.433836   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.434083   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.467474   59208 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:44:38.674498   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:39.468897   59208 start.go:297] selected driver: kvm2
	I0719 15:44:39.468921   59208 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.469073   59208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:44:39.470083   59208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.470178   59208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:44:39.485232   59208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:44:39.485586   59208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:44:39.485616   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:44:39.485624   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:44:39.485666   59208 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.485752   59208 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.487537   59208 out.go:177] * Starting "default-k8s-diff-port-601445" primary control-plane node in "default-k8s-diff-port-601445" cluster
	I0719 15:44:39.488672   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:44:39.488709   59208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:44:39.488718   59208 cache.go:56] Caching tarball of preloaded images
	I0719 15:44:39.488795   59208 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:44:39.488807   59208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:44:39.488895   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:44:39.489065   59208 start.go:360] acquireMachinesLock for default-k8s-diff-port-601445: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:44:41.746585   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:47.826521   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:50.898507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:56.978531   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:00.050437   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:06.130631   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:09.202570   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:15.282481   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:18.354537   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:24.434488   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:27.506515   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:33.586522   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:36.658503   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:42.738573   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:45.810538   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:51.890547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:54.962507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:01.042509   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:04.114621   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:10.194576   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:13.266450   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:19.346524   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:22.418506   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:28.498553   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:31.570507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:37.650477   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:40.722569   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:46.802495   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:49.874579   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:55.954547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:59.026454   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
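(Aside: the repeated "no route to host" messages above are the SSH reachability poll against the stopped embed-certs-817144 VM at 192.168.72.37:22. Below is a minimal Go sketch of such a dial-until-reachable loop, assuming a fixed 3-second retry interval; minikube's actual retry/backoff logic lives in libmachine and retry.go and may differ.)

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP address until it accepts connections or the
// deadline passes. Illustrative only; not minikube's implementation.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 is reachable
		}
		time.Sleep(3 * time.Second) // e.g. "dial tcp ...:22: connect: no route to host"
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	fmt.Println(waitForSSH("192.168.72.37:22", 10*time.Second))
}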
	I0719 15:47:02.030619   58417 start.go:364] duration metric: took 4m36.939495617s to acquireMachinesLock for "no-preload-382231"
	I0719 15:47:02.030679   58417 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:02.030685   58417 fix.go:54] fixHost starting: 
	I0719 15:47:02.031010   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:02.031039   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:02.046256   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0719 15:47:02.046682   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:02.047151   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:47:02.047178   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:02.047573   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:02.047818   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:02.048023   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:47:02.049619   58417 fix.go:112] recreateIfNeeded on no-preload-382231: state=Stopped err=<nil>
	I0719 15:47:02.049641   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	W0719 15:47:02.049785   58417 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:02.051800   58417 out.go:177] * Restarting existing kvm2 VM for "no-preload-382231" ...
	I0719 15:47:02.028090   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:02.028137   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028489   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:47:02.028517   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:47:02.030488   58376 machine.go:97] duration metric: took 4m37.428160404s to provisionDockerMachine
	I0719 15:47:02.030529   58376 fix.go:56] duration metric: took 4m37.450063037s for fixHost
	I0719 15:47:02.030535   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 4m37.450081944s
	W0719 15:47:02.030559   58376 start.go:714] error starting host: provision: host is not running
	W0719 15:47:02.030673   58376 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 15:47:02.030686   58376 start.go:729] Will try again in 5 seconds ...
	I0719 15:47:02.053160   58417 main.go:141] libmachine: (no-preload-382231) Calling .Start
	I0719 15:47:02.053325   58417 main.go:141] libmachine: (no-preload-382231) Ensuring networks are active...
	I0719 15:47:02.054289   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network default is active
	I0719 15:47:02.054786   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network mk-no-preload-382231 is active
	I0719 15:47:02.055259   58417 main.go:141] libmachine: (no-preload-382231) Getting domain xml...
	I0719 15:47:02.056202   58417 main.go:141] libmachine: (no-preload-382231) Creating domain...
	I0719 15:47:03.270495   58417 main.go:141] libmachine: (no-preload-382231) Waiting to get IP...
	I0719 15:47:03.271595   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.272074   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.272151   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.272057   59713 retry.go:31] will retry after 239.502065ms: waiting for machine to come up
	I0719 15:47:03.513745   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.514224   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.514264   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.514191   59713 retry.go:31] will retry after 315.982717ms: waiting for machine to come up
	I0719 15:47:03.831739   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.832155   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.832187   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.832111   59713 retry.go:31] will retry after 468.820113ms: waiting for machine to come up
	I0719 15:47:04.302865   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.303273   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.303306   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.303236   59713 retry.go:31] will retry after 526.764683ms: waiting for machine to come up
	I0719 15:47:04.832048   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.832551   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.832583   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.832504   59713 retry.go:31] will retry after 754.533212ms: waiting for machine to come up
	I0719 15:47:07.032310   58376 start.go:360] acquireMachinesLock for embed-certs-817144: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:47:05.588374   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:05.588834   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:05.588862   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:05.588785   59713 retry.go:31] will retry after 757.18401ms: waiting for machine to come up
	I0719 15:47:06.347691   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:06.348135   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:06.348164   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:06.348053   59713 retry.go:31] will retry after 1.097437331s: waiting for machine to come up
	I0719 15:47:07.446836   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:07.447199   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:07.447219   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:07.447158   59713 retry.go:31] will retry after 1.448513766s: waiting for machine to come up
	I0719 15:47:08.897886   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:08.898289   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:08.898317   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:08.898216   59713 retry.go:31] will retry after 1.583843671s: waiting for machine to come up
	I0719 15:47:10.483476   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:10.483934   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:10.483963   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:10.483864   59713 retry.go:31] will retry after 1.86995909s: waiting for machine to come up
	I0719 15:47:12.355401   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:12.355802   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:12.355827   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:12.355762   59713 retry.go:31] will retry after 2.577908462s: waiting for machine to come up
	I0719 15:47:14.934837   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:14.935263   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:14.935285   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:14.935225   59713 retry.go:31] will retry after 3.158958575s: waiting for machine to come up
	I0719 15:47:19.278747   58817 start.go:364] duration metric: took 3m55.914249116s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:47:19.278822   58817 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:19.278831   58817 fix.go:54] fixHost starting: 
	I0719 15:47:19.279163   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:19.279196   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:19.294722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0719 15:47:19.295092   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:19.295537   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:47:19.295561   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:19.295950   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:19.296186   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:19.296333   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:47:19.297864   58817 fix.go:112] recreateIfNeeded on old-k8s-version-862924: state=Stopped err=<nil>
	I0719 15:47:19.297895   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	W0719 15:47:19.298077   58817 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:19.300041   58817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	I0719 15:47:18.095456   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095912   58417 main.go:141] libmachine: (no-preload-382231) Found IP for machine: 192.168.39.227
	I0719 15:47:18.095936   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has current primary IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095942   58417 main.go:141] libmachine: (no-preload-382231) Reserving static IP address...
	I0719 15:47:18.096317   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.096357   58417 main.go:141] libmachine: (no-preload-382231) Reserved static IP address: 192.168.39.227
	I0719 15:47:18.096376   58417 main.go:141] libmachine: (no-preload-382231) DBG | skip adding static IP to network mk-no-preload-382231 - found existing host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"}
	I0719 15:47:18.096392   58417 main.go:141] libmachine: (no-preload-382231) DBG | Getting to WaitForSSH function...
	I0719 15:47:18.096407   58417 main.go:141] libmachine: (no-preload-382231) Waiting for SSH to be available...
	I0719 15:47:18.098619   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.098978   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.099008   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.099122   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH client type: external
	I0719 15:47:18.099151   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa (-rw-------)
	I0719 15:47:18.099183   58417 main.go:141] libmachine: (no-preload-382231) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:18.099196   58417 main.go:141] libmachine: (no-preload-382231) DBG | About to run SSH command:
	I0719 15:47:18.099210   58417 main.go:141] libmachine: (no-preload-382231) DBG | exit 0
	I0719 15:47:18.222285   58417 main.go:141] libmachine: (no-preload-382231) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:18.222607   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetConfigRaw
	I0719 15:47:18.223181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.225751   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226062   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.226105   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226327   58417 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/config.json ...
	I0719 15:47:18.226504   58417 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:18.226520   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:18.226684   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.228592   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.228936   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.228960   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.229094   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.229246   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229398   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229516   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.229663   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.229887   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.229901   58417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:18.330731   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:18.330764   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331053   58417 buildroot.go:166] provisioning hostname "no-preload-382231"
	I0719 15:47:18.331084   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331282   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.333905   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334212   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.334270   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334331   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.334510   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334705   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334850   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.335030   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.335216   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.335230   58417 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-382231 && echo "no-preload-382231" | sudo tee /etc/hostname
	I0719 15:47:18.453128   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-382231
	
	I0719 15:47:18.453151   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.455964   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456323   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.456349   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456549   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.456822   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457010   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457158   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.457300   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.457535   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.457561   58417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-382231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-382231/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-382231' | sudo tee -a /etc/hosts; 
				fi
			fi
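(Aside: the shell fragment above ensures the freshly provisioned hostname resolves locally via a 127.0.1.1 entry in /etc/hosts. A minimal sketch of the same decision logic in Go follows; ensureHostsEntry is a hypothetical helper written for illustration, not minikube's code.)

package main

import (
	"fmt"
	"regexp"
)

// ensureHostsEntry mirrors the shell logic above: if no /etc/hosts line
// already ends in the hostname, rewrite an existing "127.0.1.1 ..." entry
// or append a new one. Hypothetical helper for illustration.
func ensureHostsEntry(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
		return hosts // hostname already mapped on some line
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "no-preload-382231"))
}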
	I0719 15:47:18.568852   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:18.568878   58417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:18.568902   58417 buildroot.go:174] setting up certificates
	I0719 15:47:18.568915   58417 provision.go:84] configureAuth start
	I0719 15:47:18.568924   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.569240   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.571473   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.571757   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.571783   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.572029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.573941   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574213   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.574247   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574393   58417 provision.go:143] copyHostCerts
	I0719 15:47:18.574455   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:18.574465   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:18.574528   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:18.574615   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:18.574622   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:18.574645   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:18.574696   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:18.574703   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:18.574722   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:18.574768   58417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.no-preload-382231 san=[127.0.0.1 192.168.39.227 localhost minikube no-preload-382231]
	I0719 15:47:18.636408   58417 provision.go:177] copyRemoteCerts
	I0719 15:47:18.636458   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:18.636477   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.638719   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639021   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.639054   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639191   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.639379   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.639532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.639795   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:18.720305   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:18.742906   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:18.764937   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:47:18.787183   58417 provision.go:87] duration metric: took 218.257504ms to configureAuth
	I0719 15:47:18.787205   58417 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:18.787355   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:47:18.787418   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.789685   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.789992   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.790017   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.790181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.790366   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790632   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.790770   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.790929   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.790943   58417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:19.053326   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:19.053350   58417 machine.go:97] duration metric: took 826.83404ms to provisionDockerMachine
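(Aside: the "%!s(MISSING)" fragments in the crio.minikube command above, and in the "date" command further down, presumably "date +%s.%N", are Go's fmt marker for a format verb that has no matching argument; they most likely appear because the command string containing a literal %s was itself passed through a formatting call when logged. A one-line Go example that reproduces the marker:)

package main

import "fmt"

func main() {
	// A %s verb with no corresponding argument: fmt prints "%!s(MISSING)",
	// which is exactly the artifact visible in the logged command above.
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s\n")
}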
	I0719 15:47:19.053364   58417 start.go:293] postStartSetup for "no-preload-382231" (driver="kvm2")
	I0719 15:47:19.053379   58417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:19.053409   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.053733   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:19.053755   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.056355   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056709   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.056737   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056884   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.057037   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.057172   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.057370   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.136785   58417 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:19.140756   58417 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:19.140777   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:19.140847   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:19.140941   58417 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:19.141044   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:19.150247   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:19.172800   58417 start.go:296] duration metric: took 119.424607ms for postStartSetup
	I0719 15:47:19.172832   58417 fix.go:56] duration metric: took 17.142146552s for fixHost
	I0719 15:47:19.172849   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.175427   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.175816   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.175851   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.176027   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.176281   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176636   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.176892   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:19.177051   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:19.177061   58417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:19.278564   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404039.251890495
	
	I0719 15:47:19.278594   58417 fix.go:216] guest clock: 1721404039.251890495
	I0719 15:47:19.278605   58417 fix.go:229] Guest: 2024-07-19 15:47:19.251890495 +0000 UTC Remote: 2024-07-19 15:47:19.172835531 +0000 UTC m=+294.220034318 (delta=79.054964ms)
	I0719 15:47:19.278651   58417 fix.go:200] guest clock delta is within tolerance: 79.054964ms
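(Aside: the guest-clock check above compares the timestamp returned by the VM with the host-side wall clock and proceeds when the difference is small. A minimal Go sketch of that arithmetic using the values from the log; the one-second tolerance is an assumption for illustration, since the log only states "within tolerance".)

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported over SSH (seconds.nanoseconds), from the log above.
	guest := time.Unix(1721404039, 251890495)
	// Host-side timestamp recorded for the same provisioning step, from the log above.
	remote := time.Date(2024, 7, 19, 15, 47, 19, 172835531, time.UTC)

	delta := guest.Sub(remote) // expected: 79.054964ms
	const tolerance = time.Second // assumed value, for illustration only
	fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
}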
	I0719 15:47:19.278659   58417 start.go:83] releasing machines lock for "no-preload-382231", held for 17.247997118s
	I0719 15:47:19.278692   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.279029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:19.281674   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282034   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.282063   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282221   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282750   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282935   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282991   58417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:19.283061   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.283095   58417 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:19.283116   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.285509   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285805   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.285828   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285846   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285959   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286182   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286276   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.286300   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.286468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286481   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286632   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.286672   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286806   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286935   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.363444   58417 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:19.387514   58417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:19.545902   58417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:19.551747   58417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:19.551812   58417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:19.568563   58417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:19.568589   58417 start.go:495] detecting cgroup driver to use...
	I0719 15:47:19.568654   58417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:19.589440   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:19.604889   58417 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:19.604962   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:19.624114   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:19.638265   58417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:19.752880   58417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:19.900078   58417 docker.go:233] disabling docker service ...
	I0719 15:47:19.900132   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:19.914990   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:19.928976   58417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:20.079363   58417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:20.203629   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:20.218502   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:20.237028   58417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 15:47:20.237089   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.248514   58417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:20.248597   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.260162   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.272166   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.283341   58417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:20.294687   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.305495   58417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.328024   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.339666   58417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:20.349271   58417 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:20.349314   58417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:20.364130   58417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:20.376267   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:20.501259   58417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:20.643763   58417 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:20.643828   58417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:20.648525   58417 start.go:563] Will wait 60s for crictl version
	I0719 15:47:20.648586   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:20.652256   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:20.689386   58417 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:20.689468   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.720662   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.751393   58417 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
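The CRI-O preparation logged above amounts, on the guest, to roughly the following commands (a condensed sketch; every path and value is taken from the log lines above, which remain the authoritative record):

	# point CRI-O at the expected pause image and cgroup driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# bridge-nf-call-iptables could not be read, so the module is loaded and IP forwarding enabled
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio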
	I0719 15:47:19.301467   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .Start
	I0719 15:47:19.301647   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:47:19.302430   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:47:19.302790   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:47:19.303288   58817 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:47:19.304087   58817 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:47:20.540210   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:47:20.541173   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.541580   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.541657   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.541560   59851 retry.go:31] will retry after 276.525447ms: waiting for machine to come up
	I0719 15:47:20.820097   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.820549   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.820577   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.820512   59851 retry.go:31] will retry after 350.128419ms: waiting for machine to come up
	I0719 15:47:21.172277   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.172787   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.172814   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.172742   59851 retry.go:31] will retry after 437.780791ms: waiting for machine to come up
	I0719 15:47:21.612338   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.612766   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.612796   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.612710   59851 retry.go:31] will retry after 607.044351ms: waiting for machine to come up
	I0719 15:47:22.221152   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.221715   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.221755   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.221589   59851 retry.go:31] will retry after 568.388882ms: waiting for machine to come up
	I0719 15:47:22.791499   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.791966   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.791996   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.791912   59851 retry.go:31] will retry after 786.805254ms: waiting for machine to come up
	I0719 15:47:20.752939   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:20.755996   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756367   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:20.756395   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756723   58417 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:20.760962   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
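The bash one-liner above rewrites /etc/hosts so the guest can reach the host machine by name; the entry it leaves behind is simply (reconstructed from the echo payload in that command):

	192.168.39.1	host.minikube.internal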
	I0719 15:47:20.776973   58417 kubeadm.go:883] updating cluster {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:20.777084   58417 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 15:47:20.777120   58417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:20.814520   58417 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 15:47:20.814547   58417 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:20.814631   58417 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:20.814650   58417 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.814657   58417 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.814682   58417 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.814637   58417 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.814736   58417 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.814808   58417 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.814742   58417 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.816435   58417 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.816446   58417 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.816513   58417 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.816535   58417 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816559   58417 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.816719   58417 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:21.003845   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 15:47:21.028954   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.039628   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.041391   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.065499   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.084966   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.142812   58417 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 15:47:21.142873   58417 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 15:47:21.142905   58417 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.142921   58417 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 15:47:21.142939   58417 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.142962   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142877   58417 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.143025   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142983   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.160141   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.182875   58417 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 15:47:21.182918   58417 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.182945   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.182958   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.182957   58417 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 15:47:21.182992   58417 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.183029   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.183044   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.183064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.272688   58417 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 15:47:21.272724   58417 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.272768   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.272783   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272825   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.272876   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272906   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 15:47:21.272931   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.272971   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:21.272997   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 15:47:21.273064   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:21.326354   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326356   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.326441   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 15:47:21.326457   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326459   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326492   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326497   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.326529   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 15:47:21.326535   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 15:47:21.326633   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.363401   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:21.363496   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:22.268448   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.010876   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.684346805s)
	I0719 15:47:24.010910   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 15:47:24.010920   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.684439864s)
	I0719 15:47:24.010952   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 15:47:24.010930   58417 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.010993   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.684342001s)
	I0719 15:47:24.011014   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.011019   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011046   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.647533327s)
	I0719 15:47:24.011066   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011098   58417 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742620594s)
	I0719 15:47:24.011137   58417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 15:47:24.011170   58417 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.011204   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:23.580485   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:23.580950   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:23.580983   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:23.580876   59851 retry.go:31] will retry after 919.322539ms: waiting for machine to come up
	I0719 15:47:24.502381   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:24.502817   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:24.502844   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:24.502776   59851 retry.go:31] will retry after 1.142581835s: waiting for machine to come up
	I0719 15:47:25.647200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:25.647663   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:25.647693   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:25.647559   59851 retry.go:31] will retry after 1.682329055s: waiting for machine to come up
	I0719 15:47:27.332531   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:27.333052   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:27.333080   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:27.333003   59851 retry.go:31] will retry after 1.579786507s: waiting for machine to come up
	I0719 15:47:27.292973   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.281931356s)
	I0719 15:47:27.293008   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 15:47:27.293001   58417 ssh_runner.go:235] Completed: which crictl: (3.281778521s)
	I0719 15:47:27.293043   58417 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:27.293064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:27.293086   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:29.269642   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976526914s)
	I0719 15:47:29.269676   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 15:47:29.269698   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269641   58417 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.97655096s)
	I0719 15:47:29.269748   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269773   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 15:47:29.269875   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:28.914628   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:28.915181   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:28.915221   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:28.915127   59851 retry.go:31] will retry after 2.156491688s: waiting for machine to come up
	I0719 15:47:31.073521   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:31.074101   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:31.074136   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:31.074039   59851 retry.go:31] will retry after 2.252021853s: waiting for machine to come up
	I0719 15:47:31.242199   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.972421845s)
	I0719 15:47:31.242257   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 15:47:31.242273   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.972374564s)
	I0719 15:47:31.242283   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:31.242306   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 15:47:31.242334   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:32.592736   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.350379333s)
	I0719 15:47:32.592762   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 15:47:32.592782   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:32.592817   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:34.547084   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954243196s)
	I0719 15:47:34.547122   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 15:47:34.547155   58417 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:34.547231   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
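Because no preload tarball exists for v1.31.0-beta.0, the run above falls back to transferring and loading each cached image tarball individually. Per image, the cycle visible in the log is roughly the following (a sketch; image names and paths are the ones the log itself reports):

	sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0   # already present in the runtime?
	sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0                     # drop any mismatched copy
	sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0                 # load the transferred tarball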
	I0719 15:47:33.328344   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:33.328815   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:33.328849   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:33.328779   59851 retry.go:31] will retry after 4.118454422s: waiting for machine to come up
	I0719 15:47:37.451169   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.451651   58817 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:47:37.451677   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:47:37.451691   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.452205   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.452240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | skip adding static IP to network mk-old-k8s-version-862924 - found existing host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"}
	I0719 15:47:37.452258   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:47:37.452276   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:47:37.452287   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:47:37.454636   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455004   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.455043   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455210   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:47:37.455242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:47:37.455284   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:37.455302   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:47:37.455316   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:47:37.583375   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:37.583754   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:47:37.584481   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.587242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587644   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.587668   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587961   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:47:37.588195   58817 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:37.588217   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:37.588446   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.590801   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591137   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.591166   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.591471   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591592   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591736   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.591896   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.592100   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.592111   58817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:37.698760   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:37.698787   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699086   58817 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:47:37.699113   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699326   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.701828   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702208   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.702253   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702339   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.702508   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702674   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702817   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.702983   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.703136   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.703147   58817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:47:37.823930   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:47:37.823960   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.826546   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.826875   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.826912   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.827043   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.827336   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827506   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827690   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.827858   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.828039   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.828056   58817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:37.935860   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:37.935888   58817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:37.935917   58817 buildroot.go:174] setting up certificates
	I0719 15:47:37.935927   58817 provision.go:84] configureAuth start
	I0719 15:47:37.935939   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.936223   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.938638   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.938990   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.939017   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.939170   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.941161   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941458   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.941487   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941597   58817 provision.go:143] copyHostCerts
	I0719 15:47:37.941669   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:37.941682   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:37.941731   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:37.941824   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:37.941832   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:37.941850   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:37.941910   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:37.941919   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:37.941942   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:37.942003   58817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
	I0719 15:47:38.046717   58817 provision.go:177] copyRemoteCerts
	I0719 15:47:38.046770   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:38.046799   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.049240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049578   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.049611   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049806   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.050026   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.050200   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.050377   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.133032   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:38.157804   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:47:38.184189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:38.207761   58817 provision.go:87] duration metric: took 271.801669ms to configureAuth
	I0719 15:47:38.207801   58817 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:38.208023   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:47:38.208148   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.211030   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211467   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.211497   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211675   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.211851   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212046   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212195   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.212374   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.212556   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.212578   58817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:38.759098   59208 start.go:364] duration metric: took 2m59.27000152s to acquireMachinesLock for "default-k8s-diff-port-601445"
	I0719 15:47:38.759165   59208 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:38.759176   59208 fix.go:54] fixHost starting: 
	I0719 15:47:38.759633   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:38.759685   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:38.779587   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0719 15:47:38.779979   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:38.780480   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:47:38.780497   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:38.780888   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:38.781129   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:38.781260   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:47:38.782786   59208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601445: state=Stopped err=<nil>
	I0719 15:47:38.782860   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	W0719 15:47:38.783056   59208 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:38.785037   59208 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-601445" ...
	I0719 15:47:38.786497   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Start
	I0719 15:47:38.786691   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring networks are active...
	I0719 15:47:38.787520   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network default is active
	I0719 15:47:38.787819   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network mk-default-k8s-diff-port-601445 is active
	I0719 15:47:38.788418   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Getting domain xml...
	I0719 15:47:38.789173   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Creating domain...
	I0719 15:47:35.191148   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 15:47:35.191193   58417 cache_images.go:123] Successfully loaded all cached images
	I0719 15:47:35.191198   58417 cache_images.go:92] duration metric: took 14.376640053s to LoadCachedImages
	I0719 15:47:35.191209   58417 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0-beta.0 crio true true} ...
	I0719 15:47:35.191329   58417 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-382231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:35.191424   58417 ssh_runner.go:195] Run: crio config
	I0719 15:47:35.236248   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:35.236276   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:35.236288   58417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:35.236309   58417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-382231 NodeName:no-preload-382231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:47:35.236464   58417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-382231"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:35.236525   58417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 15:47:35.247524   58417 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:35.247611   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:35.257583   58417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 15:47:35.275057   58417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 15:47:35.291468   58417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
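
	The kubeadm, kubelet, and kube-proxy configuration printed above is rendered from the cluster settings and copied to /var/tmp/minikube/kubeadm.yaml.new. A simplified sketch of that rendering step with text/template (the template here is heavily trimmed and is not minikube's actual template):

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // A heavily trimmed stand-in for the kubeadm config template; the real
	    // one carries many more fields (see the YAML in the log above).
	    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	    kind: ClusterConfiguration
	    clusterName: {{.ClusterName}}
	    kubernetesVersion: {{.KubernetesVersion}}
	    controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	    networking:
	      podSubnet: "{{.PodSubnet}}"
	      serviceSubnet: {{.ServiceCIDR}}
	    `

	    type clusterParams struct {
	        ClusterName       string
	        KubernetesVersion string
	        APIServerPort     int
	        PodSubnet         string
	        ServiceCIDR       string
	    }

	    func main() {
	        p := clusterParams{
	            ClusterName:       "no-preload-382231",
	            KubernetesVersion: "v1.31.0-beta.0",
	            APIServerPort:     8443,
	            PodSubnet:         "10.244.0.0/16",
	            ServiceCIDR:       "10.96.0.0/12",
	        }
	        tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	        // The rendered bytes would then be copied to the node; here we just print them.
	        if err := tmpl.Execute(os.Stdout, p); err != nil {
	            panic(err)
	        }
	    }
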
	I0719 15:47:35.308021   58417 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:35.312121   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:35.324449   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:35.451149   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:35.477844   58417 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231 for IP: 192.168.39.227
	I0719 15:47:35.477868   58417 certs.go:194] generating shared ca certs ...
	I0719 15:47:35.477887   58417 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:35.478043   58417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:35.478093   58417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:35.478103   58417 certs.go:256] generating profile certs ...
	I0719 15:47:35.478174   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.key
	I0719 15:47:35.478301   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key.46f9a235
	I0719 15:47:35.478339   58417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key
	I0719 15:47:35.478482   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:35.478520   58417 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:35.478530   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:35.478549   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:35.478569   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:35.478591   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:35.478628   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:35.479291   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:35.523106   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:35.546934   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:35.585616   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:35.617030   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:47:35.641486   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:47:35.680051   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:35.703679   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:47:35.728088   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:35.751219   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:35.774149   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:35.796985   58417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:35.813795   58417 ssh_runner.go:195] Run: openssl version
	I0719 15:47:35.819568   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:35.830350   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834792   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834847   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.840531   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:35.851584   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:35.862655   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867139   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867199   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.872916   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:35.883986   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:35.894795   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899001   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899049   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.904496   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:35.915180   58417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:35.919395   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:35.926075   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:35.931870   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:35.938089   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:35.944079   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:35.950449   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
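
	The openssl -checkend 86400 runs above confirm each control-plane certificate remains valid for at least 24 hours. An equivalent check in Go using crypto/x509 (an illustrative sketch, not minikube's code; the path comes from the log):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    // expiresWithin reports whether the PEM-encoded certificate at path
	    // expires within d (the `openssl x509 -checkend` equivalent).
	    func expiresWithin(path string, d time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM block in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("expires within 24h:", soon)
	    }
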
	I0719 15:47:35.956291   58417 kubeadm.go:392] StartCluster: {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:35.956396   58417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:35.956452   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:35.993976   58417 cri.go:89] found id: ""
	I0719 15:47:35.994047   58417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:36.004507   58417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:36.004532   58417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:36.004579   58417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:36.014644   58417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:36.015628   58417 kubeconfig.go:125] found "no-preload-382231" server: "https://192.168.39.227:8443"
	I0719 15:47:36.017618   58417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:36.027252   58417 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0719 15:47:36.027281   58417 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:36.027292   58417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:36.027350   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:36.066863   58417 cri.go:89] found id: ""
	I0719 15:47:36.066934   58417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:36.082971   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:36.092782   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:36.092802   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:36.092841   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:36.101945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:36.101998   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:36.111368   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:36.120402   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:36.120447   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:36.130124   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.138945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:36.138990   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.148176   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:36.157008   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:36.157060   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:36.166273   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:36.176032   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:36.291855   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.285472   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.476541   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.547807   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
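
	A control-plane restart re-runs five kubeadm init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd. A rough sketch of driving that sequence with os/exec on the node (the command layout mirrors the log; the loop itself is illustrative, not minikube's code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        const binDir = "/var/lib/minikube/binaries/v1.31.0-beta.0"
	        const cfg = "/var/tmp/minikube/kubeadm.yaml"
	        // The restart path re-runs these kubeadm init phases against the existing config.
	        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	        for _, phase := range phases {
	            cmd := fmt.Sprintf("sudo env PATH=%q:$PATH kubeadm init phase %s --config %s", binDir, phase, cfg)
	            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	            if err != nil {
	                fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
	                return
	            }
	        }
	        fmt.Println("control-plane phases completed")
	    }
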
	I0719 15:47:37.652551   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:37.652649   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.153088   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.653690   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.718826   58417 api_server.go:72] duration metric: took 1.066275053s to wait for apiserver process to appear ...
	I0719 15:47:38.718858   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:47:38.718891   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:38.503709   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:38.503737   58817 machine.go:97] duration metric: took 915.527957ms to provisionDockerMachine
	I0719 15:47:38.503750   58817 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:47:38.503762   58817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:38.503783   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.504151   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:38.504180   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.507475   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.507843   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.507877   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.508083   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.508314   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.508465   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.508583   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.593985   58817 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:38.598265   58817 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:38.598287   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:38.598352   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:38.598446   58817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:38.598533   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:38.609186   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:38.644767   58817 start.go:296] duration metric: took 141.002746ms for postStartSetup
	I0719 15:47:38.644808   58817 fix.go:56] duration metric: took 19.365976542s for fixHost
	I0719 15:47:38.644836   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.648171   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648545   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.648576   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648777   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.649009   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649185   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649360   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.649513   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.649779   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.649795   58817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:38.758955   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404058.716653194
	
	I0719 15:47:38.758978   58817 fix.go:216] guest clock: 1721404058.716653194
	I0719 15:47:38.758987   58817 fix.go:229] Guest: 2024-07-19 15:47:38.716653194 +0000 UTC Remote: 2024-07-19 15:47:38.644812576 +0000 UTC m=+255.418683135 (delta=71.840618ms)
	I0719 15:47:38.759010   58817 fix.go:200] guest clock delta is within tolerance: 71.840618ms
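
	The fix step compares the guest clock (read via date +%s.%N over SSH) against the host clock and accepts the drift when it is within tolerance. A compact sketch of that comparison (the 2s tolerance here is an assumed value for illustration):

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // withinTolerance reports the absolute host/guest clock delta and whether it is acceptable.
	    func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	        delta := guest.Sub(host)
	        if delta < 0 {
	            delta = -delta
	        }
	        return delta, delta <= tolerance
	    }

	    func main() {
	        host := time.Now()
	        guest := host.Add(71840618 * time.Nanosecond) // ~71.8ms, the delta seen in the log
	        delta, ok := withinTolerance(host, guest, 2*time.Second)
	        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
	    }
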
	I0719 15:47:38.759017   58817 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 19.4802155s
	I0719 15:47:38.759056   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.759308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:38.761901   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762334   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.762368   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762525   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763030   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763198   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763296   58817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:38.763343   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.763489   58817 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:38.763522   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.766613   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.766771   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767028   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767050   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767219   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767298   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767377   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767453   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767577   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767637   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767723   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767768   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.767845   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.874680   58817 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:38.882155   58817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:39.030824   58817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:39.038357   58817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:39.038458   58817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:39.059981   58817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:39.060015   58817 start.go:495] detecting cgroup driver to use...
	I0719 15:47:39.060081   58817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:39.082631   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:39.101570   58817 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:39.101628   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:39.120103   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:39.139636   58817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:39.259574   58817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:39.441096   58817 docker.go:233] disabling docker service ...
	I0719 15:47:39.441162   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:39.460197   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:39.476884   58817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:39.639473   58817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:39.773468   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:39.790968   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:39.811330   58817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:47:39.811407   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.823965   58817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:39.824057   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.835454   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.846201   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.856951   58817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:39.869495   58817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:39.880850   58817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:39.880914   58817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:39.900465   58817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:39.911488   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:40.032501   58817 ssh_runner.go:195] Run: sudo systemctl restart crio
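
	The sed commands above point CRI-O at registry.k8s.io/pause:3.2 and the cgroupfs cgroup manager before the service is restarted. A rough Go equivalent of those in-place edits (the file path comes from the log; the helper is illustrative, not minikube's code):

	    package main

	    import (
	        "fmt"
	        "os"
	        "regexp"
	    )

	    // rewriteLine replaces any line matching pattern with repl in the named file,
	    // mirroring the `sudo sed -i 's|^.*pause_image = .*$|...|'` calls in the log.
	    func rewriteLine(path, pattern, repl string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        re := regexp.MustCompile("(?m)" + pattern)
	        return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
	    }

	    func main() {
	        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	        edits := [][2]string{
	            {`^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.2"`},
	            {`^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	        }
	        for _, e := range edits {
	            if err := rewriteLine(conf, e[0], e[1]); err != nil {
	                fmt.Fprintln(os.Stderr, err)
	                return
	            }
	        }
	        // followed by: sudo systemctl restart crio
	    }
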
	I0719 15:47:40.194606   58817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:40.194676   58817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:40.199572   58817 start.go:563] Will wait 60s for crictl version
	I0719 15:47:40.199683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:40.203747   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:40.246479   58817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:40.246594   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.275992   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.313199   58817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:47:40.314363   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:40.317688   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318081   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:40.318106   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318333   58817 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:40.323006   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
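
	The shell pipeline above drops any stale host.minikube.internal entry from /etc/hosts and appends the current gateway IP. A Go sketch of the same idempotent update (illustrative only):

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // ensureHostsEntry removes any existing line for host and appends "ip\thost",
	    // mirroring the grep/echo/cp pipeline run over SSH in the log above.
	    func ensureHostsEntry(path, ip, host string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            if strings.HasSuffix(line, "\t"+host) {
	                continue // drop the stale entry
	            }
	            kept = append(kept, line)
	        }
	        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	        if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	        }
	    }
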
	I0719 15:47:40.336488   58817 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:40.336626   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:47:40.336672   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:40.394863   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:40.394934   58817 ssh_runner.go:195] Run: which lz4
	I0719 15:47:40.399546   58817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:47:40.404163   58817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:47:40.404197   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:47:42.191817   58817 crio.go:462] duration metric: took 1.792317426s to copy over tarball
	I0719 15:47:42.191882   58817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:41.984204   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:41.984237   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:41.984255   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.031024   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:42.031055   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:42.219815   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.256851   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.256888   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:42.719015   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.756668   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.756705   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.219173   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.255610   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:43.255645   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.719116   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.725453   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:47:43.739070   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:47:43.739108   58417 api_server.go:131] duration metric: took 5.020238689s to wait for apiserver health ...
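
	The polling above hits /healthz roughly every 500ms, treating 403 and 500 responses as not-ready until a 200 arrives. A stripped-down version of that readiness loop (InsecureSkipVerify is used only to keep the sketch self-contained; the real client authenticates with the cluster's certificates):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or the deadline expires.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Sketch only: skip TLS verification instead of loading the cluster CA/client certs.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver not healthy after %v", timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.39.227:8443/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	            return
	        }
	        fmt.Println("apiserver healthy")
	    }
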
	I0719 15:47:43.739119   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:43.739128   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:43.741458   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:47:40.069048   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting to get IP...
	I0719 15:47:40.069866   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070409   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070480   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.070379   59996 retry.go:31] will retry after 299.168281ms: waiting for machine to come up
	I0719 15:47:40.370939   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371381   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.371340   59996 retry.go:31] will retry after 388.345842ms: waiting for machine to come up
	I0719 15:47:40.761301   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762861   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762889   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.762797   59996 retry.go:31] will retry after 305.39596ms: waiting for machine to come up
	I0719 15:47:41.070215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070791   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070823   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.070746   59996 retry.go:31] will retry after 452.50233ms: waiting for machine to come up
	I0719 15:47:41.525465   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.525997   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.526019   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.525920   59996 retry.go:31] will retry after 686.050268ms: waiting for machine to come up
	I0719 15:47:42.214012   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214513   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214545   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:42.214465   59996 retry.go:31] will retry after 867.815689ms: waiting for machine to come up
	I0719 15:47:43.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084240   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:43.084198   59996 retry.go:31] will retry after 1.006018507s: waiting for machine to come up
	I0719 15:47:44.092571   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093050   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:44.092992   59996 retry.go:31] will retry after 961.604699ms: waiting for machine to come up
	I0719 15:47:43.743125   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:47:43.780558   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:47:43.825123   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:47:43.849564   58417 system_pods.go:59] 8 kube-system pods found
	I0719 15:47:43.849608   58417 system_pods.go:61] "coredns-5cfdc65f69-9p4dr" [b6744bc9-b683-4f7e-b506-a95eb58ac308] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:47:43.849620   58417 system_pods.go:61] "etcd-no-preload-382231" [1f2704ae-84a0-4636-9826-f6bb5d2cb8b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:47:43.849632   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [e4ae90fb-9024-4420-9249-6f936ff43894] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:47:43.849643   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [ceb3538d-a6b9-4135-b044-b139003baf35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:47:43.849650   58417 system_pods.go:61] "kube-proxy-z2z9r" [fdc0eb8f-2884-436b-ba1e-4c71107f756c] Running
	I0719 15:47:43.849657   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [5ae3221b-7186-4dbe-9b1b-fb4c8c239c62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:47:43.849677   58417 system_pods.go:61] "metrics-server-78fcd8795b-zwr8g" [4d4de9aa-89f2-4cf4-85c2-26df25bd82c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:47:43.849687   58417 system_pods.go:61] "storage-provisioner" [ab5ce17f-a0da-4ab7-803e-245ba4363d09] Running
	I0719 15:47:43.849696   58417 system_pods.go:74] duration metric: took 24.54438ms to wait for pod list to return data ...
	I0719 15:47:43.849709   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:47:43.864512   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:47:43.864636   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:47:43.864684   58417 node_conditions.go:105] duration metric: took 14.967708ms to run NodePressure ...
	I0719 15:47:43.864727   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:44.524399   58417 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531924   58417 kubeadm.go:739] kubelet initialised
	I0719 15:47:44.531944   58417 kubeadm.go:740] duration metric: took 7.516197ms waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531952   58417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:47:44.538016   58417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:45.377244   58817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18533335s)
	I0719 15:47:45.377275   58817 crio.go:469] duration metric: took 3.185430213s to extract the tarball
	I0719 15:47:45.377282   58817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:47:45.422160   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:45.463351   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:45.463377   58817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:45.463437   58817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.463445   58817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.463484   58817 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.463496   58817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.463452   58817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.463470   58817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465250   58817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465259   58817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.465270   58817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.465280   58817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.465252   58817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.465254   58817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.465322   58817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.465358   58817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.652138   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:47:45.694548   58817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:47:45.694600   58817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:47:45.694655   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.698969   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:47:45.721986   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.747138   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:47:45.779449   58817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:47:45.779485   58817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.779526   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.783597   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.822950   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.825025   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:47:45.830471   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.835797   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.837995   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.840998   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.907741   58817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:47:45.907793   58817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.907845   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.928805   58817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:47:45.928844   58817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.928918   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.948467   58817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:47:45.948522   58817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.948571   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.966584   58817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:47:45.966629   58817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.966683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975276   58817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:47:45.975316   58817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.975339   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.975355   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975378   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.975424   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.975449   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:46.069073   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:46.069100   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:47:46.079020   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:47:46.080816   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:47:46.080818   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:47:46.111983   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:47:46.308204   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:46.465651   58817 cache_images.go:92] duration metric: took 1.002255395s to LoadCachedImages
	W0719 15:47:46.465740   58817 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0719 15:47:46.465753   58817 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:47:46.465899   58817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:46.465973   58817 ssh_runner.go:195] Run: crio config
	I0719 15:47:46.524125   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:47:46.524152   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:46.524167   58817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:46.524190   58817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:47:46.524322   58817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:46.524476   58817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:47:46.534654   58817 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:46.534726   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:46.544888   58817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:47:46.565864   58817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:47:46.584204   58817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:47:46.603470   58817 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:46.607776   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:46.624713   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:46.752753   58817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:46.776115   58817 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:47:46.776151   58817 certs.go:194] generating shared ca certs ...
	I0719 15:47:46.776182   58817 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:46.776376   58817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:46.776431   58817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:46.776443   58817 certs.go:256] generating profile certs ...
	I0719 15:47:46.776559   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:47:46.776622   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:47:46.776673   58817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:47:46.776811   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:46.776860   58817 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:46.776880   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:46.776922   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:46.776961   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:46.776991   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:46.777051   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:46.777929   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:46.815207   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:46.863189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:46.894161   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:46.932391   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:47:46.981696   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:47:47.016950   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:47.043597   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:47:47.067408   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:47.092082   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:47.116639   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:47.142425   58817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:47.161443   58817 ssh_runner.go:195] Run: openssl version
	I0719 15:47:47.167678   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:47.180194   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185276   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185330   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.191437   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:47.203471   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:47.215645   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220392   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220444   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.226332   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:47.238559   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:47.251382   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256213   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256268   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.262261   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:47.275192   58817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:47.280176   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:47.288308   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:47.295013   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:47.301552   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:47.307628   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:47.313505   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:47:47.319956   58817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:47.320042   58817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:47.320097   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.359706   58817 cri.go:89] found id: ""
	I0719 15:47:47.359789   58817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:47.373816   58817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:47.373839   58817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:47.373907   58817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:47.386334   58817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:47.387432   58817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:47:47.388146   58817 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-3847/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862924" cluster setting kubeconfig missing "old-k8s-version-862924" context setting]
	I0719 15:47:47.389641   58817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:47.393000   58817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:47.404737   58817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.102
	I0719 15:47:47.404770   58817 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:47.404782   58817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:47.404847   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.448460   58817 cri.go:89] found id: ""
	I0719 15:47:47.448529   58817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:47.466897   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:47.479093   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:47.479136   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:47.479201   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:47.490338   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:47.490425   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:47.502079   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:47.514653   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:47.514722   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:47.526533   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.536043   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:47.536109   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.545691   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:47.555221   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:47.555295   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:47.564645   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:47.574094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:47.740041   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:45.055856   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056318   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056347   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:45.056263   59996 retry.go:31] will retry after 1.300059023s: waiting for machine to come up
	I0719 15:47:46.357875   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358379   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358407   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:46.358331   59996 retry.go:31] will retry after 2.269558328s: waiting for machine to come up
	I0719 15:47:48.630965   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631641   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631674   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:48.631546   59996 retry.go:31] will retry after 2.829487546s: waiting for machine to come up
	I0719 15:47:47.449778   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:48.045481   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:48.045508   58417 pod_ready.go:81] duration metric: took 3.507466621s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.045521   58417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.272472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.545776   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.692516   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.799640   58817 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:48.799721   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.299983   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.800470   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.300833   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.800741   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.300351   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.800185   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.463569   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464003   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:51.463968   59996 retry.go:31] will retry after 2.917804786s: waiting for machine to come up
	I0719 15:47:54.383261   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383967   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383993   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:54.383924   59996 retry.go:31] will retry after 4.044917947s: waiting for machine to come up
	I0719 15:47:50.052168   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:51.052114   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:51.052135   58417 pod_ready.go:81] duration metric: took 3.006607122s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:51.052144   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059540   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:52.059563   58417 pod_ready.go:81] duration metric: took 1.007411773s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059576   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.066338   58417 pod_ready.go:102] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:54.567056   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.567076   58417 pod_ready.go:81] duration metric: took 2.507493559s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.567085   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571655   58417 pod_ready.go:92] pod "kube-proxy-z2z9r" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.571672   58417 pod_ready.go:81] duration metric: took 4.581191ms for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571680   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.575983   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.576005   58417 pod_ready.go:81] duration metric: took 4.315788ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.576017   58417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:53.300353   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.800804   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.800691   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.800502   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.300314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.300773   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.432420   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432945   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Found IP for machine: 192.168.61.144
	I0719 15:47:58.432976   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has current primary IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432988   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserving static IP address...
	I0719 15:47:58.433361   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.433395   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | skip adding static IP to network mk-default-k8s-diff-port-601445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"}
	I0719 15:47:58.433412   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserved static IP address: 192.168.61.144
	I0719 15:47:58.433430   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for SSH to be available...
	I0719 15:47:58.433442   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Getting to WaitForSSH function...
	I0719 15:47:58.435448   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435770   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.435807   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435868   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH client type: external
	I0719 15:47:58.435930   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa (-rw-------)
	I0719 15:47:58.435973   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:58.435992   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | About to run SSH command:
	I0719 15:47:58.436002   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | exit 0
	I0719 15:47:58.562187   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:58.562564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetConfigRaw
	I0719 15:47:58.563233   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.565694   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566042   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.566066   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566301   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:47:58.566469   59208 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:58.566489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:58.566684   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.569109   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569485   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.569512   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569594   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.569763   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.569912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.570022   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.570167   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.570398   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.570412   59208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:58.675164   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:58.675217   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675455   59208 buildroot.go:166] provisioning hostname "default-k8s-diff-port-601445"
	I0719 15:47:58.675487   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.678103   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678522   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.678564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678721   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.678908   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679074   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679198   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.679345   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.679516   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.679531   59208 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-601445 && echo "default-k8s-diff-port-601445" | sudo tee /etc/hostname
	I0719 15:47:58.802305   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-601445
	
	I0719 15:47:58.802336   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.805215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805582   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.805613   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805796   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.805981   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806139   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806322   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.806517   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.806689   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.806706   59208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-601445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-601445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-601445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:58.919959   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:58.919985   59208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:58.920019   59208 buildroot.go:174] setting up certificates
	I0719 15:47:58.920031   59208 provision.go:84] configureAuth start
	I0719 15:47:58.920041   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.920283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.922837   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923193   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.923225   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923413   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.925832   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926128   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.926156   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926297   59208 provision.go:143] copyHostCerts
	I0719 15:47:58.926360   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:58.926374   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:58.926425   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:58.926512   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:58.926520   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:58.926543   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:58.926600   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:58.926609   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:58.926630   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:58.926682   59208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-601445 san=[127.0.0.1 192.168.61.144 default-k8s-diff-port-601445 localhost minikube]
	I0719 15:47:59.080911   59208 provision.go:177] copyRemoteCerts
	I0719 15:47:59.080966   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:59.080990   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084029   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.084059   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084219   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.084411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.084531   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.084674   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.172754   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:59.198872   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 15:47:59.222898   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 15:47:59.246017   59208 provision.go:87] duration metric: took 325.975105ms to configureAuth
	I0719 15:47:59.246037   59208 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:59.246215   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:47:59.246312   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.248757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249079   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.249111   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249354   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.249526   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249679   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249779   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.249924   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.250142   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.250161   59208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:59.743101   58376 start.go:364] duration metric: took 52.710718223s to acquireMachinesLock for "embed-certs-817144"
	I0719 15:47:59.743169   58376 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:59.743177   58376 fix.go:54] fixHost starting: 
	I0719 15:47:59.743553   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:59.743591   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:59.760837   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0719 15:47:59.761216   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:59.761734   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:47:59.761754   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:59.762080   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:59.762291   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:47:59.762504   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:47:59.764044   58376 fix.go:112] recreateIfNeeded on embed-certs-817144: state=Stopped err=<nil>
	I0719 15:47:59.764067   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	W0719 15:47:59.764217   58376 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:59.766063   58376 out.go:177] * Restarting existing kvm2 VM for "embed-certs-817144" ...
	I0719 15:47:56.582753   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:58.583049   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:59.508289   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:59.508327   59208 machine.go:97] duration metric: took 941.842272ms to provisionDockerMachine
	I0719 15:47:59.508343   59208 start.go:293] postStartSetup for "default-k8s-diff-port-601445" (driver="kvm2")
	I0719 15:47:59.508359   59208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:59.508383   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.508687   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:59.508720   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.511449   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.511887   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.511911   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.512095   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.512275   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.512437   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.512580   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.596683   59208 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:59.600761   59208 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:59.600782   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:59.600841   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:59.600911   59208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:59.600996   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:59.609867   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:59.633767   59208 start.go:296] duration metric: took 125.408568ms for postStartSetup
	I0719 15:47:59.633803   59208 fix.go:56] duration metric: took 20.874627736s for fixHost
	I0719 15:47:59.633825   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.636600   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.636944   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.636977   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.637121   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.637328   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637495   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637640   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.637811   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.637989   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.637999   59208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:59.742929   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404079.728807147
	
	I0719 15:47:59.742957   59208 fix.go:216] guest clock: 1721404079.728807147
	I0719 15:47:59.742967   59208 fix.go:229] Guest: 2024-07-19 15:47:59.728807147 +0000 UTC Remote: 2024-07-19 15:47:59.633807395 +0000 UTC m=+200.280673126 (delta=94.999752ms)
	I0719 15:47:59.743008   59208 fix.go:200] guest clock delta is within tolerance: 94.999752ms
	I0719 15:47:59.743013   59208 start.go:83] releasing machines lock for "default-k8s-diff-port-601445", held for 20.983876369s
	I0719 15:47:59.743040   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.743262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:59.746145   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746501   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.746534   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746662   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747297   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747461   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747553   59208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:59.747603   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.747714   59208 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:59.747738   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.750268   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750583   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750751   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750916   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750932   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.750942   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.751127   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751170   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.751269   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751353   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751421   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.751489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751646   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.834888   59208 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:59.859285   59208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:00.009771   59208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:00.015906   59208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:00.015973   59208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:00.032129   59208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:00.032150   59208 start.go:495] detecting cgroup driver to use...
	I0719 15:48:00.032215   59208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:00.050052   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:00.063282   59208 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:00.063341   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:00.078073   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:00.092872   59208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:00.217105   59208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:00.364335   59208 docker.go:233] disabling docker service ...
	I0719 15:48:00.364403   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:00.384138   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:00.400280   59208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:00.543779   59208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:00.671512   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:00.687337   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:00.708629   59208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:00.708690   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.720508   59208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:00.720580   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.732952   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.743984   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.756129   59208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:00.766873   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.777481   59208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.799865   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.812450   59208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:00.822900   59208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:00.822964   59208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:00.836117   59208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:00.845958   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:00.959002   59208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:01.104519   59208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:01.104598   59208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:01.110652   59208 start.go:563] Will wait 60s for crictl version
	I0719 15:48:01.110711   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:48:01.114358   59208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:01.156969   59208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:01.157063   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.187963   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.219925   59208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:47:58.299763   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.800069   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.299998   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.800005   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.300717   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.800601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.300433   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.800788   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.300324   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.221101   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:48:01.224369   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:01.224789   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224989   59208 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:01.229813   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:01.243714   59208 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:01.243843   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:01.243886   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:01.283013   59208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:01.283093   59208 ssh_runner.go:195] Run: which lz4
	I0719 15:48:01.287587   59208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:48:01.291937   59208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:01.291965   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:02.810751   59208 crio.go:462] duration metric: took 1.52319928s to copy over tarball
	I0719 15:48:02.810846   59208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:59.767270   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Start
	I0719 15:47:59.767433   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring networks are active...
	I0719 15:47:59.768056   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network default is active
	I0719 15:47:59.768371   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network mk-embed-certs-817144 is active
	I0719 15:47:59.768804   58376 main.go:141] libmachine: (embed-certs-817144) Getting domain xml...
	I0719 15:47:59.769396   58376 main.go:141] libmachine: (embed-certs-817144) Creating domain...
	I0719 15:48:01.024457   58376 main.go:141] libmachine: (embed-certs-817144) Waiting to get IP...
	I0719 15:48:01.025252   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.025697   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.025741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.025660   60153 retry.go:31] will retry after 211.260956ms: waiting for machine to come up
	I0719 15:48:01.238027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.238561   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.238588   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.238529   60153 retry.go:31] will retry after 346.855203ms: waiting for machine to come up
	I0719 15:48:01.587201   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.587773   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.587815   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.587736   60153 retry.go:31] will retry after 327.69901ms: waiting for machine to come up
	I0719 15:48:01.917433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.917899   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.917931   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.917864   60153 retry.go:31] will retry after 474.430535ms: waiting for machine to come up
	I0719 15:48:02.393610   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.394139   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.394168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.394061   60153 retry.go:31] will retry after 491.247455ms: waiting for machine to come up
	I0719 15:48:02.886826   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.887296   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.887329   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.887249   60153 retry.go:31] will retry after 661.619586ms: waiting for machine to come up
	I0719 15:48:03.550633   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:03.551175   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:03.551199   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:03.551126   60153 retry.go:31] will retry after 1.10096194s: waiting for machine to come up
	I0719 15:48:00.583866   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:02.585144   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:03.300240   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.799829   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.299793   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.800609   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.300595   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.799844   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.800150   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.299923   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.800063   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.112520   59208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301644218s)
	I0719 15:48:05.112555   59208 crio.go:469] duration metric: took 2.301774418s to extract the tarball
	I0719 15:48:05.112565   59208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:05.151199   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:05.193673   59208 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:05.193701   59208 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:05.193712   59208 kubeadm.go:934] updating node { 192.168.61.144 8444 v1.30.3 crio true true} ...
	I0719 15:48:05.193836   59208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-601445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:05.193919   59208 ssh_runner.go:195] Run: crio config
	I0719 15:48:05.239103   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:05.239131   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:05.239146   59208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:05.239176   59208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-601445 NodeName:default-k8s-diff-port-601445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:05.239374   59208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-601445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:05.239441   59208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:05.249729   59208 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:05.249799   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:05.259540   59208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 15:48:05.277388   59208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:05.294497   59208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 15:48:05.313990   59208 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:05.318959   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:05.332278   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:05.463771   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:05.480474   59208 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445 for IP: 192.168.61.144
	I0719 15:48:05.480499   59208 certs.go:194] generating shared ca certs ...
	I0719 15:48:05.480520   59208 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:05.480674   59208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:05.480732   59208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:05.480746   59208 certs.go:256] generating profile certs ...
	I0719 15:48:05.480859   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.key
	I0719 15:48:05.480937   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key.e31ea710
	I0719 15:48:05.480992   59208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key
	I0719 15:48:05.481128   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:05.481165   59208 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:05.481180   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:05.481210   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:05.481245   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:05.481276   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:05.481334   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:05.481940   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:05.524604   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:05.562766   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:05.618041   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:05.660224   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 15:48:05.689232   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:05.713890   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:05.738923   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:05.764447   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:05.793905   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:05.823630   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:05.849454   59208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:05.868309   59208 ssh_runner.go:195] Run: openssl version
	I0719 15:48:05.874423   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:05.887310   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.891994   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.892057   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.898173   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:05.911541   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:05.922829   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927537   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927600   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.933642   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:05.946269   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:05.958798   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963899   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963959   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.969801   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
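
The three ls/openssl/ln sequences above repeat the same per-file step for each CA bundle copied to /usr/share/ca-certificates: compute the OpenSSL subject hash of the certificate and refresh the /etc/ssl/certs/<hash>.0 symlink so OpenSSL-based clients can find it. A minimal Go sketch of that step, under the assumption it is run locally rather than over ssh_runner as in the log (the helper name linkCACert is illustrative, not minikube's own code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert is an illustrative sketch of the logged sequence: hash a CA file
    // with openssl and point /etc/ssl/certs/<hash>.0 at it (ln -fs replaces a stale link).
    func linkCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	// Paths taken from the log lines above.
    	for _, p := range []string{
    		"/usr/share/ca-certificates/110122.pem",
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/11012.pem",
    	} {
    		if err := linkCACert(p); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
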
	I0719 15:48:05.980966   59208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:05.985487   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:05.991303   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:05.997143   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:06.003222   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:06.008984   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:06.014939   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
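
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours); the six checks above apply that test to the apiserver, etcd and front-proxy client certificates before reusing them. The same check can be expressed directly with Go's crypto/x509, as in this rough sketch (the helper name and the single path in main are illustrative, not minikube's implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // before now+window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h; it would need to be regenerated")
    	}
    }
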
	I0719 15:48:06.020976   59208 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:06.021059   59208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:06.021106   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.066439   59208 cri.go:89] found id: ""
	I0719 15:48:06.066503   59208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:06.080640   59208 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:06.080663   59208 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:06.080730   59208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:06.093477   59208 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:06.094740   59208 kubeconfig.go:125] found "default-k8s-diff-port-601445" server: "https://192.168.61.144:8444"
	I0719 15:48:06.096907   59208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:06.107974   59208 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.144
	I0719 15:48:06.108021   59208 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:06.108035   59208 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:06.108109   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.156149   59208 cri.go:89] found id: ""
	I0719 15:48:06.156222   59208 ssh_runner.go:195] Run: sudo systemctl stop kubelet
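
The "stopping kube-system containers" step lists container IDs with crictl filtered by the pod-namespace label, stops any it finds, and then stops the kubelet; here crictl returned nothing, so only the kubelet was stopped. A rough Go sketch of that sequence under the assumption it runs locally (not minikube's actual ssh_runner-based code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // stopKubeSystem is an illustrative sketch: list all kube-system containers
    // known to the CRI runtime, stop them, then stop the kubelet.
    func stopKubeSystem() error {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out)) // empty when no containers exist, as in the log
    	if len(ids) > 0 {
    		args := append([]string{"crictl", "stop"}, ids...)
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
    }

    func main() {
    	if err := stopKubeSystem(); err != nil {
    		fmt.Println("stop failed:", err)
    	}
    }
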
	I0719 15:48:06.172431   59208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:06.182482   59208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:06.182511   59208 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:06.182562   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 15:48:06.192288   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:06.192361   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:06.202613   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 15:48:06.212553   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:06.212624   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:06.223086   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.233949   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:06.234007   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.247224   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 15:48:06.257851   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:06.257908   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
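
Each of the four grep/rm pairs above applies the same stale-config rule: keep an /etc/kubernetes/*.conf file only if it already references https://control-plane.minikube.internal:8444, otherwise remove it so kubeadm can regenerate it. Since none of the files existed, all four greps failed and the rm calls were no-ops. A compact sketch of that loop (run locally here for illustration; the logged code executes the same commands over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444" // from the log above
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint is missing or the file does not exist.
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
    			_ = exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }
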
	I0719 15:48:06.268650   59208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:06.279549   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:06.421964   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.407768   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.614213   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.686560   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
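
Rather than running a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, with the version-pinned binaries directory prepended to PATH. A hedged sketch of that driver loop, using the exact shell invocations shown above (not minikube's internal code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		// Matches the logged command: pinned kubeadm/kubectl/kubelet binaries on PATH,
    		// one init phase at a time against the staged config.
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase)
    		c := exec.Command("/bin/bash", "-c", cmd)
    		c.Stdout, c.Stderr = os.Stdout, os.Stderr
    		if err := c.Run(); err != nil {
    			fmt.Println("phase failed:", phase, err)
    			return
    		}
    	}
    }
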
	I0719 15:48:07.769476   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:07.769590   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.270472   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.770366   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.795057   59208 api_server.go:72] duration metric: took 1.025580277s to wait for apiserver process to appear ...
	I0719 15:48:08.795086   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:08.795112   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:08.795617   59208 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0719 15:48:09.295459   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:04.653309   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:04.653784   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:04.653846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:04.653753   60153 retry.go:31] will retry after 1.276153596s: waiting for machine to come up
	I0719 15:48:05.931365   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:05.931820   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:05.931848   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:05.931798   60153 retry.go:31] will retry after 1.372328403s: waiting for machine to come up
	I0719 15:48:07.305390   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:07.305892   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:07.305922   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:07.305850   60153 retry.go:31] will retry after 1.738311105s: waiting for machine to come up
	I0719 15:48:09.046095   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:09.046526   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:09.046558   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:09.046481   60153 retry.go:31] will retry after 2.169449629s: waiting for machine to come up
	I0719 15:48:05.084157   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:07.583246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:09.584584   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:11.457584   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.457651   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.457670   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.490130   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.490165   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.795439   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.803724   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:11.803757   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.295287   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.300002   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:12.300034   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.795285   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.800067   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:48:12.808020   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:12.808045   59208 api_server.go:131] duration metric: took 4.012952016s to wait for apiserver health ...
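
The healthz wait logged above is a plain polling loop: hit https://192.168.61.144:8444/healthz roughly every half second, treat connection refused, 403 (anonymous user before RBAC bootstrap completes) and 500 (post-start hooks still failing) as "not ready yet", and stop once the endpoint returns 200 "ok". A minimal sketch of such a poller; TLS verification is skipped here purely for brevity, whereas the real client trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.61.144:8444/healthz" // endpoint from the log above
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			fmt.Println("not ready yet:", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }
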
	I0719 15:48:12.808055   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:12.808064   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:12.810134   59208 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:08.300278   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.799805   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.299882   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.800690   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.300543   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.300260   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.800160   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.812011   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:12.824520   59208 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:12.846711   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:12.855286   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:12.855315   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:12.855322   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:12.855329   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:12.855335   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:12.855345   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:12.855353   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:12.855360   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:12.855369   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:12.855377   59208 system_pods.go:74] duration metric: took 8.645314ms to wait for pod list to return data ...
	I0719 15:48:12.855390   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:12.858531   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:12.858556   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:12.858566   59208 node_conditions.go:105] duration metric: took 3.171526ms to run NodePressure ...
	I0719 15:48:12.858581   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:13.176014   59208 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180575   59208 kubeadm.go:739] kubelet initialised
	I0719 15:48:13.180602   59208 kubeadm.go:740] duration metric: took 4.561708ms waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180612   59208 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:13.187723   59208 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.204023   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204052   59208 pod_ready.go:81] duration metric: took 16.303152ms for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.204061   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204070   59208 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.212768   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212790   59208 pod_ready.go:81] duration metric: took 8.709912ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.212800   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212812   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.220452   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220474   59208 pod_ready.go:81] duration metric: took 7.650656ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.220482   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220489   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.251973   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.251997   59208 pod_ready.go:81] duration metric: took 31.499608ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.252008   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.252029   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.650914   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650940   59208 pod_ready.go:81] duration metric: took 398.904724ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.650948   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650954   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.050582   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050615   59208 pod_ready.go:81] duration metric: took 399.652069ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.050630   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050642   59208 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.450349   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450379   59208 pod_ready.go:81] duration metric: took 399.72875ms for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.450391   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450399   59208 pod_ready.go:38] duration metric: took 1.269776818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
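
Each pod_ready entry above waits up to 4 minutes for a system pod's Ready condition, but short-circuits ("skipping!") while the node itself still reports Ready:"False" after the kubelet restart. The equivalent check can be expressed with kubectl's condition wait, as in this illustrative sketch (the helper and the chosen pod name are examples, not minikube's internal wait logic):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitPodReady blocks until the named kube-system pod reports Ready=True,
    // roughly what the pod_ready waits above do once the node is Ready again.
    func waitPodReady(kubeconfig, pod string) error {
    	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
    		"-n", "kube-system", "wait", "--for=condition=Ready",
    		"pod/"+pod, "--timeout=4m")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	if err := waitPodReady("/home/jenkins/minikube-integration/19302-3847/kubeconfig",
    		"etcd-default-k8s-diff-port-601445"); err != nil {
    		fmt.Println("pod did not become Ready:", err)
    	}
    }
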
	I0719 15:48:14.450416   59208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:14.462296   59208 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:14.462318   59208 kubeadm.go:597] duration metric: took 8.38163922s to restartPrimaryControlPlane
	I0719 15:48:14.462329   59208 kubeadm.go:394] duration metric: took 8.441360513s to StartCluster
	I0719 15:48:14.462348   59208 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.462422   59208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:14.464082   59208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.464400   59208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:14.464459   59208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:14.464531   59208 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464570   59208 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.464581   59208 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:14.464592   59208 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464610   59208 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464636   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:14.464670   59208 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:14.464672   59208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-601445"
	W0719 15:48:14.464684   59208 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:14.464613   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.464740   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.465050   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465111   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465151   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465178   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465235   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.466230   59208 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:11.217150   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:11.217605   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:11.217634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:11.217561   60153 retry.go:31] will retry after 3.406637692s: waiting for machine to come up
	I0719 15:48:14.467899   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:14.481294   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0719 15:48:14.481538   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0719 15:48:14.481541   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0719 15:48:14.481658   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.482122   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482145   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482363   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482387   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482461   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482478   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482590   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482704   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482762   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482853   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.483131   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483159   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.483199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483217   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.486437   59208 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.486462   59208 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:14.486492   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.486893   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.486932   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.498388   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0719 15:48:14.498897   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0719 15:48:14.498952   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499251   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499660   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499678   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.499838   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499853   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.500068   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500168   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500232   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.500410   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.501505   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0719 15:48:14.501876   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.502391   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.502413   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.502456   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.502745   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.503006   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.503314   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.503341   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.505162   59208 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:14.505166   59208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:12.084791   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.582986   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.506465   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:14.506487   59208 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:14.506506   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.506585   59208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.506604   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:14.506628   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.510227   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511092   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511134   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511207   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511231   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511257   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511370   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511390   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511570   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511574   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511662   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.511713   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511787   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511840   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.520612   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0719 15:48:14.521013   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.521451   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.521470   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.521817   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.522016   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.523622   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.523862   59208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.523876   59208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:14.523895   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.526426   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.526882   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.526941   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.527060   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.527190   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.527344   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.527439   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.674585   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:14.693700   59208 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:14.752990   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.856330   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:14.856350   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:14.884762   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:14.884784   59208 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:14.895548   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.915815   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:14.915844   59208 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:14.979442   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
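
Enabling an addon amounts to copying its manifests into /etc/kubernetes/addons on the node and applying them with the version-pinned kubectl against the node's own kubeconfig, as the Run lines above show for storageclass, storage-provisioner and metrics-server. A sketch of that apply step, mirroring the logged command (run locally here for illustration rather than over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// sudo KUBECONFIG=... kubectl apply -f <metrics-server manifests>, as in the log above.
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("addon apply failed:", err)
    	}
    }
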
	I0719 15:48:15.098490   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098517   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098869   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.098893   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.098902   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.099141   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.099158   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.105078   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.105252   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.105506   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.105526   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.802868   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.802892   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803265   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803279   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.803285   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.803517   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803530   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803577   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.905945   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.905972   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906244   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906266   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906266   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.906275   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.906283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906484   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906496   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906511   59208 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:15.908671   59208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 15:48:13.299986   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.800036   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.300736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.799875   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.300297   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.800535   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.299951   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.800667   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.300251   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.800590   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.910057   59208 addons.go:510] duration metric: took 1.445597408s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 15:48:16.697266   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:18.698379   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.627319   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:14.627800   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:14.627822   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:14.627767   60153 retry.go:31] will retry after 4.38444645s: waiting for machine to come up
	I0719 15:48:19.016073   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016711   58376 main.go:141] libmachine: (embed-certs-817144) Found IP for machine: 192.168.72.37
	I0719 15:48:19.016742   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has current primary IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016749   58376 main.go:141] libmachine: (embed-certs-817144) Reserving static IP address...
	I0719 15:48:19.017180   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.017204   58376 main.go:141] libmachine: (embed-certs-817144) Reserved static IP address: 192.168.72.37
	I0719 15:48:19.017222   58376 main.go:141] libmachine: (embed-certs-817144) DBG | skip adding static IP to network mk-embed-certs-817144 - found existing host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"}
	I0719 15:48:19.017239   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Getting to WaitForSSH function...
	I0719 15:48:19.017254   58376 main.go:141] libmachine: (embed-certs-817144) Waiting for SSH to be available...
	I0719 15:48:19.019511   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.019867   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.019896   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.020064   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH client type: external
	I0719 15:48:19.020080   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa (-rw-------)
	I0719 15:48:19.020107   58376 main.go:141] libmachine: (embed-certs-817144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:48:19.020115   58376 main.go:141] libmachine: (embed-certs-817144) DBG | About to run SSH command:
	I0719 15:48:19.020124   58376 main.go:141] libmachine: (embed-certs-817144) DBG | exit 0
	I0719 15:48:19.150328   58376 main.go:141] libmachine: (embed-certs-817144) DBG | SSH cmd err, output: <nil>: 
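The external SSH invocation logged above can be reproduced by hand to confirm the freshly leased guest is reachable; a minimal sketch, reusing the key path and IP from the log (not something the test runs itself):

    # Sketch: same OpenSSH options libmachine passes to the external client above
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa \
        docker@192.168.72.37 'exit 0' && echo "SSH reachable"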
	I0719 15:48:19.150676   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetConfigRaw
	I0719 15:48:19.151317   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.154087   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154600   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.154634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154907   58376 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/config.json ...
	I0719 15:48:19.155143   58376 machine.go:94] provisionDockerMachine start ...
	I0719 15:48:19.155168   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:19.155369   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.157741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.158060   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158175   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.158368   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158618   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158769   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.158945   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.159144   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.159161   58376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:48:19.274836   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:48:19.274863   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275148   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:48:19.275174   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275373   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.278103   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278489   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.278518   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.278892   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279111   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279299   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.279577   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.279798   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.279815   58376 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-817144 && echo "embed-certs-817144" | sudo tee /etc/hostname
	I0719 15:48:19.413956   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-817144
	
	I0719 15:48:19.413988   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.416836   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.417196   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417408   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.417599   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417777   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417911   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.418083   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.418274   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.418290   58376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-817144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-817144/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-817144' | sudo tee -a /etc/hosts; 
				fi
			fi
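The /etc/hosts rewrite above is idempotent; a quick manual check that it and the earlier `sudo hostname` step landed, purely as a sketch:

    # Sketch: expect a single 127.0.1.1 entry carrying the machine name
    grep -n '^127\.0\.1\.1' /etc/hosts     # 127.0.1.1 embed-certs-817144
    hostname                               # embed-certs-817144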
	I0719 15:48:16.583538   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.083431   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.541400   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:48:19.541439   58376 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:48:19.541464   58376 buildroot.go:174] setting up certificates
	I0719 15:48:19.541478   58376 provision.go:84] configureAuth start
	I0719 15:48:19.541495   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.541801   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.544209   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544579   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.544608   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544766   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.547206   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.547570   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547714   58376 provision.go:143] copyHostCerts
	I0719 15:48:19.547772   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:48:19.547782   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:48:19.547827   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:48:19.547939   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:48:19.547949   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:48:19.547969   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:48:19.548024   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:48:19.548031   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:48:19.548047   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:48:19.548093   58376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.embed-certs-817144 san=[127.0.0.1 192.168.72.37 embed-certs-817144 localhost minikube]
	I0719 15:48:20.024082   58376 provision.go:177] copyRemoteCerts
	I0719 15:48:20.024137   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:48:20.024157   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.026940   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027322   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.027358   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027541   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.027819   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.028011   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.028165   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.117563   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:48:20.144428   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:48:20.171520   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:48:20.195188   58376 provision.go:87] duration metric: took 653.6924ms to configureAuth
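configureAuth above regenerated server.pem with the SAN list [127.0.0.1 192.168.72.37 embed-certs-817144 localhost minikube]; a sketch of verifying that list on the host side (paths from the log, not a step the test performs):

    # Sketch: inspect the SANs baked into the freshly generated server certificate
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'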
	I0719 15:48:20.195215   58376 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:48:20.195432   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:20.195518   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.198648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.198970   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.199007   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.199126   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.199335   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199527   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199687   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.199849   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.200046   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.200063   58376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:48:20.502753   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:48:20.502782   58376 machine.go:97] duration metric: took 1.347623735s to provisionDockerMachine
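The sysconfig drop-in written a few lines up only takes effect because the same command restarts crio; a sketch of confirming both on the guest (assumes the crio unit sources /etc/sysconfig/crio.minikube, which the restart above implies):

    # Sketch: check the insecure-registry flag landed and CRI-O came back up
    cat /etc/sysconfig/crio.minikube    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio            # active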
	I0719 15:48:20.502794   58376 start.go:293] postStartSetup for "embed-certs-817144" (driver="kvm2")
	I0719 15:48:20.502805   58376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:48:20.502821   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.503204   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:48:20.503248   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.506142   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.506563   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506697   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.506938   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.507125   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.507258   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.593356   58376 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:48:20.597843   58376 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:48:20.597877   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:48:20.597948   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:48:20.598048   58376 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:48:20.598164   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:48:20.607951   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:20.634860   58376 start.go:296] duration metric: took 132.043928ms for postStartSetup
	I0719 15:48:20.634900   58376 fix.go:56] duration metric: took 20.891722874s for fixHost
	I0719 15:48:20.634919   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.637846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638181   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.638218   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638439   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.638674   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.638884   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.639054   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.639256   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.639432   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.639444   58376 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:48:20.755076   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404100.730818472
	
	I0719 15:48:20.755107   58376 fix.go:216] guest clock: 1721404100.730818472
	I0719 15:48:20.755115   58376 fix.go:229] Guest: 2024-07-19 15:48:20.730818472 +0000 UTC Remote: 2024-07-19 15:48:20.634903926 +0000 UTC m=+356.193225446 (delta=95.914546ms)
	I0719 15:48:20.755134   58376 fix.go:200] guest clock delta is within tolerance: 95.914546ms
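fix.go compares the guest clock against the host clock and accepts the ~96ms delta above as within tolerance; the same comparison by hand, as a sketch (key path and IP from the log, bc assumed available on the host):

    # Sketch: guest-vs-host clock delta, sub-second precision
    key=/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa
    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i "$key" \
            docker@192.168.72.37 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$guest - $host" | bc)s"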
	I0719 15:48:20.755139   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 21.011996674s
	I0719 15:48:20.755171   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.755465   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:20.758255   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758621   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.758644   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758861   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759348   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759545   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759656   58376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:48:20.759720   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.759780   58376 ssh_runner.go:195] Run: cat /version.json
	I0719 15:48:20.759802   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.762704   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.762833   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763161   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763202   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763399   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763493   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763545   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763608   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763693   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763772   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764001   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763996   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.764156   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764278   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.867430   58376 ssh_runner.go:195] Run: systemctl --version
	I0719 15:48:20.873463   58376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:21.029369   58376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:21.035953   58376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:21.036028   58376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:21.054352   58376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:21.054381   58376 start.go:495] detecting cgroup driver to use...
	I0719 15:48:21.054440   58376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:21.071903   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:21.088624   58376 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:21.088688   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:21.104322   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:21.120089   58376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:21.242310   58376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:21.422514   58376 docker.go:233] disabling docker service ...
	I0719 15:48:21.422589   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:21.439213   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:21.454361   58376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:21.577118   58376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:21.704150   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:21.719160   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:21.738765   58376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:21.738817   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.750720   58376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:21.750798   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.763190   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.775630   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.787727   58376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:21.799520   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.812016   58376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.830564   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.841770   58376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:21.851579   58376 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:21.851651   58376 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:21.864529   58376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
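The modprobe and the echo above configure br_netfilter and ip_forward only for the running guest; a persistent equivalent on a stock distro, purely as a sketch (not what minikube's buildroot image does):

    # Sketch: persist the bridge-netfilter module and forwarding sysctls across reboots
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system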
	I0719 15:48:21.874301   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:21.994669   58376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:22.131448   58376 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:22.131521   58376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:22.137328   58376 start.go:563] Will wait 60s for crictl version
	I0719 15:48:22.137391   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:48:22.141409   58376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:22.182947   58376 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:22.183029   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.217804   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.252450   58376 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:48:18.300557   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.800420   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.300696   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.799874   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.300803   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.800634   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.300760   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.799929   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.300267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.800463   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.197350   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:22.197536   59208 node_ready.go:49] node "default-k8s-diff-port-601445" has status "Ready":"True"
	I0719 15:48:22.197558   59208 node_ready.go:38] duration metric: took 7.503825721s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:22.197568   59208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:22.203380   59208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:24.211899   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:22.253862   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:22.256397   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256763   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:22.256791   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256968   58376 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:22.261184   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:22.274804   58376 kubeadm.go:883] updating cluster {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:22.274936   58376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:22.274994   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:22.317501   58376 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:22.317559   58376 ssh_runner.go:195] Run: which lz4
	I0719 15:48:22.321646   58376 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:48:22.326455   58376 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:22.326478   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:23.820083   58376 crio.go:462] duration metric: took 1.498469232s to copy over tarball
	I0719 15:48:23.820155   58376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:48:21.583230   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.585191   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.300116   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.800737   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.300641   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.800158   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.300678   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.800635   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.299778   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.799791   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.299845   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.710838   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.786269   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:26.105248   58376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.285062307s)
	I0719 15:48:26.105271   58376 crio.go:469] duration metric: took 2.285164513s to extract the tarball
	I0719 15:48:26.105279   58376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:26.142811   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:26.185631   58376 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:26.185660   58376 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:26.185668   58376 kubeadm.go:934] updating node { 192.168.72.37 8443 v1.30.3 crio true true} ...
	I0719 15:48:26.185784   58376 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-817144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:26.185857   58376 ssh_runner.go:195] Run: crio config
	I0719 15:48:26.238150   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:26.238172   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:26.238183   58376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:26.238211   58376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.37 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-817144 NodeName:embed-certs-817144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:26.238449   58376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-817144"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:26.238515   58376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:26.249200   58376 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:26.249278   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:26.258710   58376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 15:48:26.279235   58376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:26.299469   58376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
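The rendered kubeadm config has just been copied to /var/tmp/minikube/kubeadm.yaml.new; recent kubeadm releases (1.26 and later) can sanity-check such a file before it is applied, so the following is a sketch rather than a step the test runs:

    # Sketch: validate the generated config with the same kubeadm binary the node uses
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new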
	I0719 15:48:26.317789   58376 ssh_runner.go:195] Run: grep 192.168.72.37	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:26.321564   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:26.333153   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:26.452270   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:26.469344   58376 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144 for IP: 192.168.72.37
	I0719 15:48:26.469366   58376 certs.go:194] generating shared ca certs ...
	I0719 15:48:26.469382   58376 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:26.469530   58376 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:26.469586   58376 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:26.469601   58376 certs.go:256] generating profile certs ...
	I0719 15:48:26.469694   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/client.key
	I0719 15:48:26.469791   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key.928d4c24
	I0719 15:48:26.469846   58376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key
	I0719 15:48:26.469982   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:26.470021   58376 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:26.470035   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:26.470071   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:26.470105   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:26.470140   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:26.470197   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:26.470812   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:26.508455   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:26.537333   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:26.565167   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:26.601152   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 15:48:26.636408   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:26.669076   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:26.695438   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:26.718897   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:26.741760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:26.764760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:26.787772   58376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:26.807332   58376 ssh_runner.go:195] Run: openssl version
	I0719 15:48:26.815182   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:26.827373   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831926   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831973   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.837923   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:48:26.849158   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:26.860466   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865178   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865249   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.870873   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:26.882044   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:26.893283   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897750   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897809   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.903395   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
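The 51391683.0, 3ec20f2e.0 and b5213941.0 names used above are OpenSSL subject-hash links; a sketch of deriving one of them by hand, matching the test -L / ln -fs pattern in the log:

    # Sketch: recreate the subject-hash symlink for minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"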
	I0719 15:48:26.914389   58376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:26.918904   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:26.924659   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:26.930521   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:26.936808   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:26.942548   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:26.948139   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
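
The certificate steps above do two things: each CA file is hashed with openssl x509 -hash and symlinked as /etc/ssl/certs/<subject-hash>.0 (the naming convention OpenSSL uses to look up trusted CAs), and each control-plane certificate is then checked with openssl x509 -checkend 86400 to confirm it stays valid for at least the next 24 hours before the cluster is restarted. A minimal Go sketch of that validity check, assuming openssl is on PATH; the certificate path and 24h window are illustrative, not minikube's actual code:

// certcheck.go: sketch of an "is this cert still valid for N seconds" probe.
package main

import (
	"fmt"
	"os/exec"
)

func certValidFor(path string, seconds int) (bool, error) {
	// `openssl x509 -checkend N` exits 0 if the certificate is still valid
	// N seconds from now, and non-zero if it will have expired by then.
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", path, "-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // expires within the window
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return true, nil
}

func main() {
	// Illustrative path; the log checks several certs under /var/lib/minikube/certs.
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for the next 24h:", ok)
}
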
	I0719 15:48:26.954557   58376 kubeadm.go:392] StartCluster: {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:26.954644   58376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:26.954722   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:26.994129   58376 cri.go:89] found id: ""
	I0719 15:48:26.994205   58376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:27.006601   58376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:27.006624   58376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:27.006699   58376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:27.017166   58376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:27.018580   58376 kubeconfig.go:125] found "embed-certs-817144" server: "https://192.168.72.37:8443"
	I0719 15:48:27.021622   58376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:27.033000   58376 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.37
	I0719 15:48:27.033033   58376 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:27.033044   58376 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:27.033083   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:27.073611   58376 cri.go:89] found id: ""
	I0719 15:48:27.073678   58376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:27.092986   58376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:27.103557   58376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:27.103580   58376 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:27.103636   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:48:27.113687   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:27.113752   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:27.123696   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:48:27.132928   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:27.132984   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:27.142566   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.152286   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:27.152335   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.161701   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:48:27.171532   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:27.171591   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:27.181229   58376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:27.192232   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:27.330656   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.287561   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.513476   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.616308   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
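
The five kubeadm runs above rebuild the control plane in place from the same config file: certs, kubeconfigs, the kubelet bootstrap, the static control-plane manifests, and the local etcd manifest. A rough Go sketch of driving that phase sequence with os/exec; the kubeadm binary location on PATH and the error handling are illustrative assumptions, not minikube's implementation:

// phases.go: run the kubeadm init phases used for an in-place restart.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", phase, err)
			return
		}
	}
}
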
	I0719 15:48:28.704518   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:28.704605   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.205265   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.082992   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.746255   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.300034   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.800118   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.300099   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.800538   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.300194   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.800056   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.300473   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.300181   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.800267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.704706   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.204728   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.221741   58376 api_server.go:72] duration metric: took 1.517220815s to wait for apiserver process to appear ...
	I0719 15:48:30.221766   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:30.221786   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.665104   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.665138   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.665152   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.703238   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.703271   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.722495   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.748303   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:32.748344   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.222861   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.227076   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.227104   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.722705   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.734658   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.734683   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:34.222279   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:34.227870   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:48:34.233621   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:34.233646   58376 api_server.go:131] duration metric: took 4.011873202s to wait for apiserver health ...
	I0719 15:48:34.233656   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:34.233664   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:34.235220   58376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
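
The 403 and 500 responses above come from repeatedly probing the apiserver's /healthz endpoint (anonymously, before RBAC bootstrap completes) until every post-start hook reports ok. A minimal Go sketch of that style of probe loop; the endpoint, timeout, and TLS handling are illustrative assumptions rather than minikube's implementation:

// healthzprobe.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.37:8443/healthz" // address taken from the log above
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is not trusted by the probing host here,
		// so verification is skipped, as an anonymous health check typically would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // endpoint answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
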
	I0719 15:48:30.210533   59208 pod_ready.go:92] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.210557   59208 pod_ready.go:81] duration metric: took 8.007151724s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.210568   59208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215669   59208 pod_ready.go:92] pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.215692   59208 pod_ready.go:81] duration metric: took 5.116005ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215702   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222633   59208 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.222655   59208 pod_ready.go:81] duration metric: took 6.947228ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222664   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227631   59208 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.227656   59208 pod_ready.go:81] duration metric: took 4.985227ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227667   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405047   59208 pod_ready.go:92] pod "kube-proxy-r7b2z" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.405073   59208 pod_ready.go:81] duration metric: took 177.397954ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405085   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805843   59208 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.805877   59208 pod_ready.go:81] duration metric: took 400.783803ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805890   59208 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:32.821231   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.236303   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:34.248133   58376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:34.270683   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:34.279907   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:34.279939   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:34.279946   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:34.279953   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:34.279960   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:34.279966   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:34.279972   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:34.279977   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:34.279982   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:34.279988   58376 system_pods.go:74] duration metric: took 9.282886ms to wait for pod list to return data ...
	I0719 15:48:34.279995   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:34.283597   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:34.283623   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:34.283634   58376 node_conditions.go:105] duration metric: took 3.634999ms to run NodePressure ...
	I0719 15:48:34.283649   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:31.082803   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:33.583510   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.586116   58376 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590095   58376 kubeadm.go:739] kubelet initialised
	I0719 15:48:34.590119   58376 kubeadm.go:740] duration metric: took 3.977479ms waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590128   58376 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:34.594987   58376 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.600192   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600212   58376 pod_ready.go:81] duration metric: took 5.205124ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.600220   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600225   58376 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.603934   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603952   58376 pod_ready.go:81] duration metric: took 3.719853ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.603959   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603965   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.607778   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607803   58376 pod_ready.go:81] duration metric: took 3.830174ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.607817   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607826   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.673753   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673775   58376 pod_ready.go:81] duration metric: took 65.937586ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.673783   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673788   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.075506   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075539   58376 pod_ready.go:81] duration metric: took 401.743578ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.075548   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075554   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.474518   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474546   58376 pod_ready.go:81] duration metric: took 398.985628ms for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.474558   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474567   58376 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.874540   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874567   58376 pod_ready.go:81] duration metric: took 399.989978ms for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.874576   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874582   58376 pod_ready.go:38] duration metric: took 1.284443879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
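
The pod_ready checks above poll each system-critical pod and count it as ready only once its PodReady condition is True, skipping pods whose hosting node is not yet Ready. A small client-go sketch of that kind of readiness wait; the kubeconfig path, namespace, pod name, and timeout are illustrative assumptions, not the test harness's code:

// podready.go: wait for a pod's Ready condition using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-n945p", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
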
	I0719 15:48:35.874646   58376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:35.886727   58376 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:35.886751   58376 kubeadm.go:597] duration metric: took 8.880120513s to restartPrimaryControlPlane
	I0719 15:48:35.886760   58376 kubeadm.go:394] duration metric: took 8.932210528s to StartCluster
	I0719 15:48:35.886781   58376 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.886859   58376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:35.888389   58376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.888642   58376 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:35.888722   58376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:35.888781   58376 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-817144"
	I0719 15:48:35.888810   58376 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-817144"
	I0719 15:48:35.888824   58376 addons.go:69] Setting default-storageclass=true in profile "embed-certs-817144"
	I0719 15:48:35.888839   58376 addons.go:69] Setting metrics-server=true in profile "embed-certs-817144"
	I0719 15:48:35.888875   58376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-817144"
	I0719 15:48:35.888888   58376 addons.go:234] Setting addon metrics-server=true in "embed-certs-817144"
	W0719 15:48:35.888897   58376 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:35.888931   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.888840   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0719 15:48:35.888843   58376 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:35.889000   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.889231   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889242   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889247   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889270   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889272   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889282   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.890641   58376 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:35.892144   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:35.905134   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0719 15:48:35.905572   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.905788   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0719 15:48:35.906107   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906132   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.906171   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.906496   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.906825   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906846   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.907126   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.907179   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.907215   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.907289   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.908269   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0719 15:48:35.908747   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.909343   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.909367   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.909787   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.910337   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910382   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.910615   58376 addons.go:234] Setting addon default-storageclass=true in "embed-certs-817144"
	W0719 15:48:35.910632   58376 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:35.910662   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.910937   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910965   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.926165   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 15:48:35.926905   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.926944   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0719 15:48:35.927369   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.927573   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927636   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927829   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927847   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927959   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928512   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.928551   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.928759   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928824   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0719 15:48:35.928964   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.929176   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.929546   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.929557   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.929927   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.930278   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.931161   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.931773   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.933234   58376 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:35.933298   58376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:35.934543   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:35.934556   58376 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:35.934569   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.934629   58376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:35.934642   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:35.934657   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.938300   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938628   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.938648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938679   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939150   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939340   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.939433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.939479   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939536   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.939619   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939673   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.939937   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.940081   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.940190   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.947955   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0719 15:48:35.948206   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.948643   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.948654   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.948961   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.949119   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.950572   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.951770   58376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:35.951779   58376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:35.951791   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.957009   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957381   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.957405   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957550   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.957717   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.957841   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.957953   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:36.072337   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:36.091547   58376 node_ready.go:35] waiting up to 6m0s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:36.182328   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:36.195704   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:36.195729   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:36.221099   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:36.224606   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:36.224632   58376 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:36.247264   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:36.247289   58376 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:36.300365   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:37.231670   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010526005s)
	I0719 15:48:37.231729   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231743   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.231765   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049406285s)
	I0719 15:48:37.231807   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231822   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232034   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232085   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232096   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.232100   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232105   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.232115   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232345   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232366   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233486   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233529   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233541   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.233549   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.233792   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233815   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233832   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.240487   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.240502   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.240732   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.240754   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.240755   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288064   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288085   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288370   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288389   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288378   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288400   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288406   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288595   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288606   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288652   58376 addons.go:475] Verifying addon metrics-server=true in "embed-certs-817144"
	I0719 15:48:37.290497   58376 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
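
Enabling the metrics-server addon above amounts to copying its manifests onto the node and applying them with the cluster's own kubectl and kubeconfig. A minimal Go sketch of the equivalent apply step; the kubectl binary on PATH and the file paths are illustrative assumptions:

// applyaddon.go: apply a set of addon manifests against a specific kubeconfig.
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	// Point kubectl at the in-cluster kubeconfig, as the log's invocation does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
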
	I0719 15:48:33.300279   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.800631   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.300013   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.800051   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.300468   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.800383   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.300186   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.800623   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.300068   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.799841   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.314792   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.814653   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.291961   58376 addons.go:510] duration metric: took 1.403238435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:48:38.096793   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.584345   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.585215   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:38.300002   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.800639   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.300564   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.800314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.300642   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.799787   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.299849   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.300242   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.800481   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.818959   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.313745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:44.314213   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:40.596246   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.095976   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.595640   58376 node_ready.go:49] node "embed-certs-817144" has status "Ready":"True"
	I0719 15:48:43.595659   58376 node_ready.go:38] duration metric: took 7.504089345s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:43.595667   58376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:43.600832   58376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605878   58376 pod_ready.go:92] pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.605900   58376 pod_ready.go:81] duration metric: took 5.046391ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605912   58376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610759   58376 pod_ready.go:92] pod "etcd-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.610778   58376 pod_ready.go:81] duration metric: took 4.85915ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610788   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615239   58376 pod_ready.go:92] pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.615257   58376 pod_ready.go:81] duration metric: took 4.46126ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615267   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619789   58376 pod_ready.go:92] pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.619804   58376 pod_ready.go:81] duration metric: took 4.530085ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619814   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998585   58376 pod_ready.go:92] pod "kube-proxy-4d4g9" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.998612   58376 pod_ready.go:81] duration metric: took 378.78761ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998622   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:40.084033   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.582983   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:43.300412   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.300117   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.799821   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.300031   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.800676   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.300710   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.800307   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.800008   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.812904   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.313178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:46.004415   58376 pod_ready.go:102] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.006304   58376 pod_ready.go:92] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:48.006329   58376 pod_ready.go:81] duration metric: took 4.00769937s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:48.006339   58376 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:45.082973   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:47.582224   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.582782   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.300512   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.799929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:48.799998   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:48.839823   58817 cri.go:89] found id: ""
	I0719 15:48:48.839845   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.839852   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:48.839863   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:48.839920   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:48.874635   58817 cri.go:89] found id: ""
	I0719 15:48:48.874661   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.874671   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:48.874679   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:48.874736   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:48.909391   58817 cri.go:89] found id: ""
	I0719 15:48:48.909417   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.909426   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:48.909431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:48.909491   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:48.951232   58817 cri.go:89] found id: ""
	I0719 15:48:48.951258   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.951265   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:48.951271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:48.951323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:48.984391   58817 cri.go:89] found id: ""
	I0719 15:48:48.984413   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.984420   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:48.984426   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:48.984481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:49.018949   58817 cri.go:89] found id: ""
	I0719 15:48:49.018987   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.018996   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:49.019003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:49.019060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:49.055182   58817 cri.go:89] found id: ""
	I0719 15:48:49.055208   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.055217   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:49.055222   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:49.055270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:49.090341   58817 cri.go:89] found id: ""
	I0719 15:48:49.090364   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.090371   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:49.090378   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:49.090390   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:49.104137   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:49.104166   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:49.239447   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:49.239473   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:49.239489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:49.307270   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:49.307307   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:49.345886   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:49.345925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:51.898153   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:51.911943   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:51.912006   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:51.946512   58817 cri.go:89] found id: ""
	I0719 15:48:51.946562   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.946573   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:51.946603   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:51.946664   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:51.982341   58817 cri.go:89] found id: ""
	I0719 15:48:51.982373   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.982381   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:51.982387   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:51.982441   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:52.019705   58817 cri.go:89] found id: ""
	I0719 15:48:52.019732   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.019739   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:52.019744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:52.019799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:52.057221   58817 cri.go:89] found id: ""
	I0719 15:48:52.057250   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.057262   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:52.057271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:52.057353   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:52.097277   58817 cri.go:89] found id: ""
	I0719 15:48:52.097306   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.097317   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:52.097325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:52.097389   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:52.136354   58817 cri.go:89] found id: ""
	I0719 15:48:52.136398   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.136406   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:52.136412   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:52.136463   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:52.172475   58817 cri.go:89] found id: ""
	I0719 15:48:52.172502   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.172510   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:52.172516   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:52.172565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:52.209164   58817 cri.go:89] found id: ""
	I0719 15:48:52.209192   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.209204   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:52.209214   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:52.209238   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:52.260069   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:52.260101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:52.274794   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:52.274825   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:52.356599   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:52.356628   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:52.356650   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:52.427582   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:52.427630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:51.814049   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:53.815503   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:50.015637   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:52.515491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:51.583726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.083179   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.977864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:54.993571   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:54.993645   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:55.034576   58817 cri.go:89] found id: ""
	I0719 15:48:55.034630   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.034641   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:55.034649   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:55.034712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:55.068305   58817 cri.go:89] found id: ""
	I0719 15:48:55.068332   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.068343   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:55.068350   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:55.068408   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:55.106192   58817 cri.go:89] found id: ""
	I0719 15:48:55.106220   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.106227   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:55.106248   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:55.106304   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:55.141287   58817 cri.go:89] found id: ""
	I0719 15:48:55.141318   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.141328   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:55.141334   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:55.141391   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:55.179965   58817 cri.go:89] found id: ""
	I0719 15:48:55.179989   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.179999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:55.180007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:55.180065   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.213558   58817 cri.go:89] found id: ""
	I0719 15:48:55.213588   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.213598   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:55.213607   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:55.213663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:55.247201   58817 cri.go:89] found id: ""
	I0719 15:48:55.247230   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.247243   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:55.247250   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:55.247309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:55.283157   58817 cri.go:89] found id: ""
	I0719 15:48:55.283191   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.283200   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:55.283211   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:55.283228   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:55.361089   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:55.361116   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:55.361134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:55.437784   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:55.437819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:55.480735   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:55.480770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:55.534013   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:55.534045   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.048567   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:58.063073   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:58.063146   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:58.100499   58817 cri.go:89] found id: ""
	I0719 15:48:58.100527   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.100538   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:58.100545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:58.100612   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:58.136885   58817 cri.go:89] found id: ""
	I0719 15:48:58.136913   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.136924   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:58.136932   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:58.137000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:58.172034   58817 cri.go:89] found id: ""
	I0719 15:48:58.172064   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.172074   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:58.172081   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:58.172135   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:58.209113   58817 cri.go:89] found id: ""
	I0719 15:48:58.209145   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.209157   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:58.209166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:58.209256   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:58.258903   58817 cri.go:89] found id: ""
	I0719 15:48:58.258938   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.258949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:58.258957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:58.259016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.816000   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.817771   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:55.014213   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.014730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:56.083381   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.088572   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.312314   58817 cri.go:89] found id: ""
	I0719 15:48:58.312342   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.312353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:58.312361   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:58.312421   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:58.349566   58817 cri.go:89] found id: ""
	I0719 15:48:58.349628   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.349638   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:58.349645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:58.349709   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:58.383834   58817 cri.go:89] found id: ""
	I0719 15:48:58.383863   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.383880   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:58.383893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:58.383907   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:58.436984   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:58.437020   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.450460   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:58.450489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:58.523392   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:58.523408   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:58.523420   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:58.601407   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:58.601439   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:01.141864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:01.155908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:01.155965   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:01.191492   58817 cri.go:89] found id: ""
	I0719 15:49:01.191524   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.191534   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:01.191542   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:01.191623   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:01.227615   58817 cri.go:89] found id: ""
	I0719 15:49:01.227646   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.227653   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:01.227659   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:01.227716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:01.262624   58817 cri.go:89] found id: ""
	I0719 15:49:01.262647   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.262655   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:01.262661   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:01.262717   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:01.298328   58817 cri.go:89] found id: ""
	I0719 15:49:01.298358   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.298370   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:01.298378   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:01.298439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:01.333181   58817 cri.go:89] found id: ""
	I0719 15:49:01.333208   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.333218   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:01.333225   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:01.333284   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:01.369952   58817 cri.go:89] found id: ""
	I0719 15:49:01.369980   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.369990   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:01.369997   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:01.370076   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:01.405232   58817 cri.go:89] found id: ""
	I0719 15:49:01.405263   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.405273   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:01.405280   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:01.405340   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:01.442960   58817 cri.go:89] found id: ""
	I0719 15:49:01.442989   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.442999   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:01.443009   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:01.443036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:01.493680   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:01.493712   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:01.506699   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:01.506732   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:01.586525   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:01.586547   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:01.586562   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:01.673849   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:01.673897   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:00.313552   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:02.812079   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:59.513087   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:01.514094   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.013514   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:00.583159   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:03.082968   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.219314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:04.233386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:04.233481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:04.274762   58817 cri.go:89] found id: ""
	I0719 15:49:04.274792   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.274802   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:04.274826   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:04.274881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:04.312047   58817 cri.go:89] found id: ""
	I0719 15:49:04.312073   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.312082   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:04.312089   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:04.312164   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:04.351258   58817 cri.go:89] found id: ""
	I0719 15:49:04.351293   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.351307   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:04.351314   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:04.351373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:04.385969   58817 cri.go:89] found id: ""
	I0719 15:49:04.385994   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.386002   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:04.386007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:04.386054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:04.425318   58817 cri.go:89] found id: ""
	I0719 15:49:04.425342   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.425351   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:04.425358   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:04.425416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:04.462578   58817 cri.go:89] found id: ""
	I0719 15:49:04.462607   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.462618   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:04.462626   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:04.462682   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:04.502967   58817 cri.go:89] found id: ""
	I0719 15:49:04.502999   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.503017   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:04.503025   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:04.503084   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:04.540154   58817 cri.go:89] found id: ""
	I0719 15:49:04.540185   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.540195   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:04.540230   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:04.540246   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:04.596126   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:04.596164   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:04.610468   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:04.610509   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:04.683759   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:04.683783   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:04.683803   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:04.764758   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:04.764796   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.303933   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:07.317959   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:07.318031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:07.356462   58817 cri.go:89] found id: ""
	I0719 15:49:07.356490   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.356498   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:07.356511   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:07.356566   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:07.391533   58817 cri.go:89] found id: ""
	I0719 15:49:07.391563   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.391574   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:07.391582   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:07.391662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:07.427877   58817 cri.go:89] found id: ""
	I0719 15:49:07.427914   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.427922   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:07.427927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:07.428005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:07.464667   58817 cri.go:89] found id: ""
	I0719 15:49:07.464691   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.464699   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:07.464704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:07.464768   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:07.499296   58817 cri.go:89] found id: ""
	I0719 15:49:07.499321   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.499329   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:07.499336   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:07.499400   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:07.541683   58817 cri.go:89] found id: ""
	I0719 15:49:07.541715   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.541726   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:07.541733   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:07.541791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:07.577698   58817 cri.go:89] found id: ""
	I0719 15:49:07.577726   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.577737   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:07.577744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:07.577799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:07.613871   58817 cri.go:89] found id: ""
	I0719 15:49:07.613904   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.613914   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:07.613926   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:07.613942   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:07.690982   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:07.691006   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:07.691021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:07.778212   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:07.778277   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.820821   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:07.820866   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:07.873053   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:07.873097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:05.312525   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.812891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:06.013654   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:08.015552   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:05.083931   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.583371   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.387941   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:10.401132   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:10.401205   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:10.437084   58817 cri.go:89] found id: ""
	I0719 15:49:10.437112   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.437120   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:10.437178   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:10.437243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:10.472675   58817 cri.go:89] found id: ""
	I0719 15:49:10.472703   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.472712   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:10.472720   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:10.472780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:10.506448   58817 cri.go:89] found id: ""
	I0719 15:49:10.506480   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.506490   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:10.506497   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:10.506544   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:10.542574   58817 cri.go:89] found id: ""
	I0719 15:49:10.542604   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.542612   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:10.542618   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:10.542701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:10.575963   58817 cri.go:89] found id: ""
	I0719 15:49:10.575990   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.575999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:10.576005   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:10.576063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:10.614498   58817 cri.go:89] found id: ""
	I0719 15:49:10.614529   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.614539   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:10.614548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:10.614613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:10.652802   58817 cri.go:89] found id: ""
	I0719 15:49:10.652825   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.652833   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:10.652838   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:10.652886   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:10.688985   58817 cri.go:89] found id: ""
	I0719 15:49:10.689019   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.689029   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:10.689041   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:10.689058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:10.741552   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:10.741586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.756514   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:10.756542   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:10.837916   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:10.837940   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:10.837956   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:10.919878   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:10.919924   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:09.824389   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.312960   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.512671   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.513359   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.082891   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:14.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:13.462603   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:13.476387   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:13.476449   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:13.514170   58817 cri.go:89] found id: ""
	I0719 15:49:13.514195   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.514205   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:13.514211   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:13.514281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:13.548712   58817 cri.go:89] found id: ""
	I0719 15:49:13.548739   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.548747   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:13.548753   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:13.548808   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:13.582623   58817 cri.go:89] found id: ""
	I0719 15:49:13.582648   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.582657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:13.582664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:13.582721   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:13.619343   58817 cri.go:89] found id: ""
	I0719 15:49:13.619369   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.619379   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:13.619385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:13.619444   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:13.655755   58817 cri.go:89] found id: ""
	I0719 15:49:13.655785   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.655793   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:13.655798   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:13.655856   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:13.691021   58817 cri.go:89] found id: ""
	I0719 15:49:13.691104   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.691124   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:13.691133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:13.691196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:13.728354   58817 cri.go:89] found id: ""
	I0719 15:49:13.728380   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.728390   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:13.728397   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:13.728459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:13.764498   58817 cri.go:89] found id: ""
	I0719 15:49:13.764526   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.764535   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:13.764544   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:13.764557   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.803474   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:13.803500   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:13.854709   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:13.854742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:13.870499   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:13.870526   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:13.943250   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:13.943270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:13.943282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
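	The block above is one iteration of the wait loop for this profile: the harness looks for a kube-apiserver process with pgrep, asks CRI-O for each control-plane container by name, finds none, and then gathers kubelet, dmesg, describe-nodes and CRI-O logs before retrying. A minimal sketch of the same probes run by hand from a shell on the node (for example after "minikube ssh"); the loop and echo formatting are illustrative additions, the individual commands are the ones shown in the log:

	# Re-run the probes from the log manually; empty crictl output corresponds to: found id: ""
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  echo "$c: ${ids:-<none>}"
	done
	sudo journalctl -u kubelet -n 400 --no-pager        # "Gathering logs for kubelet"
	sudo journalctl -u crio -n 400 --no-pager           # "Gathering logs for CRI-O"
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # "Gathering logs for dmesg"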
	I0719 15:49:16.525806   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:16.539483   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:16.539558   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:16.574003   58817 cri.go:89] found id: ""
	I0719 15:49:16.574032   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.574043   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:16.574050   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:16.574112   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:16.610637   58817 cri.go:89] found id: ""
	I0719 15:49:16.610668   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.610676   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:16.610682   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:16.610731   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:16.648926   58817 cri.go:89] found id: ""
	I0719 15:49:16.648957   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.648968   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:16.648975   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:16.649027   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:16.682819   58817 cri.go:89] found id: ""
	I0719 15:49:16.682848   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.682859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:16.682866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:16.682919   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:16.719879   58817 cri.go:89] found id: ""
	I0719 15:49:16.719912   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.719922   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:16.719930   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:16.719988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:16.755776   58817 cri.go:89] found id: ""
	I0719 15:49:16.755809   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.755820   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:16.755829   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:16.755903   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:16.792158   58817 cri.go:89] found id: ""
	I0719 15:49:16.792186   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.792193   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:16.792199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:16.792260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:16.829694   58817 cri.go:89] found id: ""
	I0719 15:49:16.829722   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.829733   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:16.829741   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:16.829761   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:16.843522   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:16.843552   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:16.914025   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:16.914047   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:16.914063   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.996672   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:16.996709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:17.042138   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:17.042170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:14.813090   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.311701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:15.014386   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.513993   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:16.584566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.082569   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.597598   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:19.611433   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:19.611487   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:19.646047   58817 cri.go:89] found id: ""
	I0719 15:49:19.646073   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.646080   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:19.646086   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:19.646145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:19.683589   58817 cri.go:89] found id: ""
	I0719 15:49:19.683620   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.683632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:19.683643   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:19.683701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:19.722734   58817 cri.go:89] found id: ""
	I0719 15:49:19.722761   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.722771   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:19.722778   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:19.722836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:19.759418   58817 cri.go:89] found id: ""
	I0719 15:49:19.759445   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.759454   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:19.759459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:19.759522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:19.795168   58817 cri.go:89] found id: ""
	I0719 15:49:19.795193   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.795201   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:19.795206   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:19.795259   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:19.830930   58817 cri.go:89] found id: ""
	I0719 15:49:19.830959   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.830969   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:19.830976   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:19.831035   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:19.866165   58817 cri.go:89] found id: ""
	I0719 15:49:19.866187   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.866195   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:19.866201   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:19.866252   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:19.899415   58817 cri.go:89] found id: ""
	I0719 15:49:19.899446   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.899456   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:19.899467   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:19.899482   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.950944   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:19.950975   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:19.964523   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:19.964545   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:20.032244   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:20.032270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:20.032290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:20.110285   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:20.110317   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:22.650693   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:22.666545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:22.666618   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:22.709820   58817 cri.go:89] found id: ""
	I0719 15:49:22.709846   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.709854   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:22.709860   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:22.709905   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:22.745373   58817 cri.go:89] found id: ""
	I0719 15:49:22.745398   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.745406   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:22.745411   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:22.745461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:22.785795   58817 cri.go:89] found id: ""
	I0719 15:49:22.785828   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.785838   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:22.785846   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:22.785904   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:22.826542   58817 cri.go:89] found id: ""
	I0719 15:49:22.826569   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.826579   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:22.826587   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:22.826648   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:22.866761   58817 cri.go:89] found id: ""
	I0719 15:49:22.866789   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.866800   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:22.866807   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:22.866868   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:22.913969   58817 cri.go:89] found id: ""
	I0719 15:49:22.913999   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.914009   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:22.914017   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:22.914082   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:22.950230   58817 cri.go:89] found id: ""
	I0719 15:49:22.950287   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.950298   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:22.950305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:22.950366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:22.986400   58817 cri.go:89] found id: ""
	I0719 15:49:22.986424   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.986434   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:22.986446   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:22.986460   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:23.072119   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:23.072153   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:23.111021   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:23.111053   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:23.161490   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:23.161518   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:23.174729   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:23.174766   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:23.251205   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
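	Every "describe nodes" attempt fails the same way: the kubectl binary bundled for the cluster (v1.20.0) reads /var/lib/minikube/kubeconfig, which points at localhost:8443 on the node, and with no kube-apiserver container running the connection is refused. A minimal sketch of confirming that from a node shell; the kubectl invocation is the one from the log, the ss check is an added assumption:

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Expect: "The connection to the server localhost:8443 was refused ..."
	sudo ss -ltnp | grep -w 8443 || echo 'nothing listening on 8443'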
	I0719 15:49:19.814129   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.814762   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:23.817102   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:20.012767   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:22.512467   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.587074   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:24.082829   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.752355   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:25.765501   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:25.765559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:25.801073   58817 cri.go:89] found id: ""
	I0719 15:49:25.801107   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.801117   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:25.801126   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:25.801187   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:25.839126   58817 cri.go:89] found id: ""
	I0719 15:49:25.839151   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.839158   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:25.839163   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:25.839210   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:25.873081   58817 cri.go:89] found id: ""
	I0719 15:49:25.873110   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.873120   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:25.873134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:25.873183   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:25.908874   58817 cri.go:89] found id: ""
	I0719 15:49:25.908910   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.908921   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:25.908929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:25.908988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:25.945406   58817 cri.go:89] found id: ""
	I0719 15:49:25.945431   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.945439   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:25.945445   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:25.945515   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:25.978276   58817 cri.go:89] found id: ""
	I0719 15:49:25.978298   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.978306   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:25.978312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:25.978359   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:26.013749   58817 cri.go:89] found id: ""
	I0719 15:49:26.013776   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.013786   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:26.013792   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:26.013840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:26.046225   58817 cri.go:89] found id: ""
	I0719 15:49:26.046269   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.046280   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:26.046290   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:26.046305   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:26.086785   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:26.086808   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:26.138746   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:26.138777   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:26.152114   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:26.152139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:26.224234   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:26.224262   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:26.224279   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:26.312496   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.312687   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.015437   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:27.514515   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:26.084854   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.584103   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.802738   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:28.817246   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:28.817321   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:28.852398   58817 cri.go:89] found id: ""
	I0719 15:49:28.852429   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.852437   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:28.852449   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:28.852500   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:28.890337   58817 cri.go:89] found id: ""
	I0719 15:49:28.890368   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.890378   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:28.890386   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:28.890446   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:28.929083   58817 cri.go:89] found id: ""
	I0719 15:49:28.929106   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.929113   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:28.929119   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:28.929173   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:28.967708   58817 cri.go:89] found id: ""
	I0719 15:49:28.967735   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.967745   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:28.967752   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:28.967812   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:29.001087   58817 cri.go:89] found id: ""
	I0719 15:49:29.001115   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.001131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:29.001139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:29.001198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:29.039227   58817 cri.go:89] found id: ""
	I0719 15:49:29.039258   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.039268   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:29.039275   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:29.039333   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:29.079927   58817 cri.go:89] found id: ""
	I0719 15:49:29.079955   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.079965   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:29.079973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:29.080037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:29.115035   58817 cri.go:89] found id: ""
	I0719 15:49:29.115060   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.115070   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:29.115080   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:29.115094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:29.168452   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:29.168487   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:29.182483   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:29.182517   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:29.256139   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:29.256177   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:29.256193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:29.342435   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:29.342472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:31.888988   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:31.902450   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:31.902524   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:31.940007   58817 cri.go:89] found id: ""
	I0719 15:49:31.940035   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.940045   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:31.940053   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:31.940111   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:31.978055   58817 cri.go:89] found id: ""
	I0719 15:49:31.978089   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.978101   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:31.978109   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:31.978168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:32.011666   58817 cri.go:89] found id: ""
	I0719 15:49:32.011697   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.011707   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:32.011714   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:32.011779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:32.046326   58817 cri.go:89] found id: ""
	I0719 15:49:32.046363   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.046373   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:32.046383   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:32.046447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:32.082387   58817 cri.go:89] found id: ""
	I0719 15:49:32.082416   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.082425   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:32.082432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:32.082488   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:32.118653   58817 cri.go:89] found id: ""
	I0719 15:49:32.118693   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.118703   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:32.118710   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:32.118769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:32.154053   58817 cri.go:89] found id: ""
	I0719 15:49:32.154075   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.154082   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:32.154088   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:32.154134   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:32.189242   58817 cri.go:89] found id: ""
	I0719 15:49:32.189272   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.189283   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:32.189293   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:32.189309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:32.263285   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:32.263313   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:32.263329   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:32.341266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:32.341302   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:32.380827   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:32.380852   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:32.432888   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:32.432922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:30.313153   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:32.812075   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:29.514963   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.515163   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.014174   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.083793   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:33.083838   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.948894   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:34.963787   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:34.963840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:35.000752   58817 cri.go:89] found id: ""
	I0719 15:49:35.000782   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.000788   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:35.000794   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:35.000849   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:35.038325   58817 cri.go:89] found id: ""
	I0719 15:49:35.038355   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.038367   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:35.038375   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:35.038433   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:35.074945   58817 cri.go:89] found id: ""
	I0719 15:49:35.074972   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.074981   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:35.074987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:35.075031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:35.111644   58817 cri.go:89] found id: ""
	I0719 15:49:35.111671   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.111681   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:35.111688   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:35.111746   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:35.146101   58817 cri.go:89] found id: ""
	I0719 15:49:35.146132   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.146141   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:35.146148   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:35.146198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:35.185147   58817 cri.go:89] found id: ""
	I0719 15:49:35.185173   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.185181   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:35.185188   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:35.185233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:35.227899   58817 cri.go:89] found id: ""
	I0719 15:49:35.227931   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.227941   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:35.227949   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:35.228010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:35.265417   58817 cri.go:89] found id: ""
	I0719 15:49:35.265441   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.265451   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:35.265462   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:35.265477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:35.316534   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:35.316567   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:35.330131   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:35.330154   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:35.401068   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:35.401091   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:35.401107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:35.477126   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:35.477170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.019443   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:38.035957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:38.036032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:38.078249   58817 cri.go:89] found id: ""
	I0719 15:49:38.078278   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.078288   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:38.078296   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:38.078367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:38.125072   58817 cri.go:89] found id: ""
	I0719 15:49:38.125098   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.125106   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:38.125112   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:38.125171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:38.165134   58817 cri.go:89] found id: ""
	I0719 15:49:38.165160   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.165170   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:38.165178   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:38.165233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:38.204968   58817 cri.go:89] found id: ""
	I0719 15:49:38.204995   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.205004   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:38.205013   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:38.205074   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:38.237132   58817 cri.go:89] found id: ""
	I0719 15:49:38.237157   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.237167   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:38.237174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:38.237231   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:34.812542   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.311929   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.312244   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:36.513892   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.013261   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:35.084098   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.587696   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:38.274661   58817 cri.go:89] found id: ""
	I0719 15:49:38.274691   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.274699   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:38.274704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:38.274747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:38.311326   58817 cri.go:89] found id: ""
	I0719 15:49:38.311354   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.311365   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:38.311372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:38.311428   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:38.348071   58817 cri.go:89] found id: ""
	I0719 15:49:38.348099   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.348110   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:38.348120   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:38.348134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:38.432986   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:38.433021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.472439   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:38.472486   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:38.526672   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:38.526706   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:38.540777   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:38.540800   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:38.617657   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.118442   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:41.131935   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:41.132016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:41.164303   58817 cri.go:89] found id: ""
	I0719 15:49:41.164330   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.164342   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:41.164348   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:41.164396   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:41.197878   58817 cri.go:89] found id: ""
	I0719 15:49:41.197901   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.197909   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:41.197927   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:41.197979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:41.231682   58817 cri.go:89] found id: ""
	I0719 15:49:41.231712   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.231722   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:41.231730   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:41.231793   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:41.268328   58817 cri.go:89] found id: ""
	I0719 15:49:41.268354   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.268364   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:41.268372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:41.268422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:41.306322   58817 cri.go:89] found id: ""
	I0719 15:49:41.306350   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.306358   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:41.306365   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:41.306416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:41.342332   58817 cri.go:89] found id: ""
	I0719 15:49:41.342361   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.342372   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:41.342379   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:41.342440   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:41.378326   58817 cri.go:89] found id: ""
	I0719 15:49:41.378352   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.378362   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:41.378371   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:41.378422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:41.410776   58817 cri.go:89] found id: ""
	I0719 15:49:41.410804   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.410814   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:41.410824   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:41.410843   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:41.424133   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:41.424157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:41.498684   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.498764   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:41.498784   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:41.583440   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:41.583472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:41.624962   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:41.624998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:41.313207   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.815916   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:41.013495   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.513445   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:40.082726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:42.583599   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.584503   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
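	Interleaved with the wait loop, three other runs (PIDs 59208, 58376 and 58417) keep polling their metrics-server pods, which never report Ready. A minimal sketch of the equivalent check with kubectl, assuming the pods carry the usual k8s-app=metrics-server label (the log only shows the generated pod names):

	kubectl -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	# A "False" Ready status here corresponds to the repeated pod_ready.go:102 lines above.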
	I0719 15:49:44.177094   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:44.191411   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:44.191466   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:44.226809   58817 cri.go:89] found id: ""
	I0719 15:49:44.226837   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.226847   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:44.226855   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:44.226951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:44.262361   58817 cri.go:89] found id: ""
	I0719 15:49:44.262391   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.262402   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:44.262408   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:44.262452   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:44.295729   58817 cri.go:89] found id: ""
	I0719 15:49:44.295758   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.295768   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:44.295775   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:44.295836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:44.330968   58817 cri.go:89] found id: ""
	I0719 15:49:44.330996   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.331005   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:44.331012   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:44.331068   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:44.367914   58817 cri.go:89] found id: ""
	I0719 15:49:44.367937   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.367945   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:44.367951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:44.368005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:44.401127   58817 cri.go:89] found id: ""
	I0719 15:49:44.401151   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.401159   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:44.401164   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:44.401207   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:44.435696   58817 cri.go:89] found id: ""
	I0719 15:49:44.435724   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.435734   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:44.435741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:44.435803   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:44.481553   58817 cri.go:89] found id: ""
	I0719 15:49:44.481582   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.481592   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:44.481603   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:44.481618   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:44.573147   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:44.573181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:44.618556   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:44.618580   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.673328   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:44.673364   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:44.687806   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:44.687835   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:44.763624   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
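The cycle above repeats for the rest of this start attempt: minikube probes for a running kube-apiserver, finds no control-plane containers via CRI-O, and every describe-nodes call is refused on localhost:8443. The same probe can be reproduced by hand; a minimal sketch, assuming SSH access to the affected profile (the profile name below is a placeholder, not taken from this run):

    # SSH into the node for the failing profile (placeholder name)
    minikube ssh -p <profile>

    # Ask CRI-O directly for control-plane containers; in this run both queries return nothing
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd

    # Confirm the API server port is closed, matching the "connection refused" errors above
    curl -k https://localhost:8443/healthz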
	I0719 15:49:47.264039   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:47.277902   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:47.277984   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:47.318672   58817 cri.go:89] found id: ""
	I0719 15:49:47.318702   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.318713   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:47.318720   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:47.318780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:47.360410   58817 cri.go:89] found id: ""
	I0719 15:49:47.360434   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.360444   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:47.360451   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:47.360507   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:47.397890   58817 cri.go:89] found id: ""
	I0719 15:49:47.397918   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.397925   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:47.397931   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:47.397981   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:47.438930   58817 cri.go:89] found id: ""
	I0719 15:49:47.438960   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.438971   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:47.438981   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:47.439040   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:47.479242   58817 cri.go:89] found id: ""
	I0719 15:49:47.479267   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.479277   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:47.479285   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:47.479341   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:47.518583   58817 cri.go:89] found id: ""
	I0719 15:49:47.518610   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.518620   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:47.518628   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:47.518686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:47.553714   58817 cri.go:89] found id: ""
	I0719 15:49:47.553736   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.553744   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:47.553750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:47.553798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:47.591856   58817 cri.go:89] found id: ""
	I0719 15:49:47.591879   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.591886   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:47.591893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:47.591904   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:47.644911   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:47.644951   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:47.659718   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:47.659742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:47.735693   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.735713   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:47.735727   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:47.816090   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:47.816121   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:46.313534   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.811536   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:46.012299   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.515396   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:47.082848   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:49.083291   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.358703   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:50.373832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:50.373908   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:50.408598   58817 cri.go:89] found id: ""
	I0719 15:49:50.408640   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.408649   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:50.408655   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:50.408701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:50.446067   58817 cri.go:89] found id: ""
	I0719 15:49:50.446096   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.446104   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:50.446110   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:50.446152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:50.480886   58817 cri.go:89] found id: ""
	I0719 15:49:50.480918   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.480927   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:50.480933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:50.480997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:50.514680   58817 cri.go:89] found id: ""
	I0719 15:49:50.514707   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.514717   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:50.514724   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:50.514779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:50.550829   58817 cri.go:89] found id: ""
	I0719 15:49:50.550854   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.550861   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:50.550866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:50.550910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:50.585407   58817 cri.go:89] found id: ""
	I0719 15:49:50.585434   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.585444   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:50.585452   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:50.585511   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:50.623083   58817 cri.go:89] found id: ""
	I0719 15:49:50.623110   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.623121   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:50.623129   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:50.623181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:50.667231   58817 cri.go:89] found id: ""
	I0719 15:49:50.667258   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.667266   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:50.667274   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:50.667290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:50.718998   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:50.719032   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:50.733560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:50.733595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:50.800276   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:50.800298   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:50.800310   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:50.881314   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:50.881354   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:50.813781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:52.817124   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.516602   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.012716   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:51.083390   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.583030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.427179   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:53.444191   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:53.444250   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:53.481092   58817 cri.go:89] found id: ""
	I0719 15:49:53.481125   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.481135   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:53.481143   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:53.481202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:53.517308   58817 cri.go:89] found id: ""
	I0719 15:49:53.517332   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.517340   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:53.517345   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:53.517390   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:53.552638   58817 cri.go:89] found id: ""
	I0719 15:49:53.552667   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.552677   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:53.552684   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:53.552750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:53.587003   58817 cri.go:89] found id: ""
	I0719 15:49:53.587027   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.587034   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:53.587044   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:53.587093   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:53.620361   58817 cri.go:89] found id: ""
	I0719 15:49:53.620389   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.620399   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:53.620406   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:53.620464   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:53.659231   58817 cri.go:89] found id: ""
	I0719 15:49:53.659255   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.659262   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:53.659267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:53.659323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:53.695312   58817 cri.go:89] found id: ""
	I0719 15:49:53.695345   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.695355   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:53.695362   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:53.695430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:53.735670   58817 cri.go:89] found id: ""
	I0719 15:49:53.735698   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.735708   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:53.735718   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:53.735733   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:53.750912   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:53.750940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:53.818038   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:53.818064   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:53.818077   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:53.902200   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:53.902259   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.945805   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:53.945847   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.498178   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:56.511454   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:56.511541   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:56.548043   58817 cri.go:89] found id: ""
	I0719 15:49:56.548070   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.548081   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:56.548089   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:56.548149   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:56.583597   58817 cri.go:89] found id: ""
	I0719 15:49:56.583620   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.583632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:56.583651   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:56.583710   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:56.622673   58817 cri.go:89] found id: ""
	I0719 15:49:56.622704   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.622714   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:56.622722   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:56.622785   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:56.659663   58817 cri.go:89] found id: ""
	I0719 15:49:56.659691   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.659702   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:56.659711   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:56.659764   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:56.694072   58817 cri.go:89] found id: ""
	I0719 15:49:56.694097   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.694105   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:56.694111   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:56.694158   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:56.730104   58817 cri.go:89] found id: ""
	I0719 15:49:56.730131   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.730139   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:56.730144   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:56.730202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:56.762952   58817 cri.go:89] found id: ""
	I0719 15:49:56.762977   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.762988   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:56.762995   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:56.763059   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:56.800091   58817 cri.go:89] found id: ""
	I0719 15:49:56.800114   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.800122   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:56.800130   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:56.800141   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:56.843328   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:56.843363   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.894700   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:56.894734   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:56.908975   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:56.908999   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:56.980062   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:56.980087   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:56.980099   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:55.312032   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.813778   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:55.013719   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.014070   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:56.083506   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:58.582593   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.557467   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:59.571083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:59.571151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:59.606593   58817 cri.go:89] found id: ""
	I0719 15:49:59.606669   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.606680   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:59.606688   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:59.606743   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:59.643086   58817 cri.go:89] found id: ""
	I0719 15:49:59.643115   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.643126   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:59.643134   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:59.643188   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:59.678976   58817 cri.go:89] found id: ""
	I0719 15:49:59.678995   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.679002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:59.679008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:59.679060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:59.713450   58817 cri.go:89] found id: ""
	I0719 15:49:59.713483   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.713490   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:59.713495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:59.713540   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:59.749902   58817 cri.go:89] found id: ""
	I0719 15:49:59.749924   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.749932   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:59.749938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:59.749985   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:59.793298   58817 cri.go:89] found id: ""
	I0719 15:49:59.793327   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.793335   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:59.793341   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:59.793399   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:59.835014   58817 cri.go:89] found id: ""
	I0719 15:49:59.835040   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.835047   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:59.835053   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:59.835101   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:59.874798   58817 cri.go:89] found id: ""
	I0719 15:49:59.874824   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.874831   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:59.874840   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:59.874851   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:59.948173   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:59.948195   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:59.948210   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:00.026793   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:00.026828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:00.066659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:00.066687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:00.119005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:00.119036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:02.634375   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:02.648845   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:02.648918   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:02.683204   58817 cri.go:89] found id: ""
	I0719 15:50:02.683231   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.683240   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:02.683246   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:02.683308   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:02.718869   58817 cri.go:89] found id: ""
	I0719 15:50:02.718901   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.718914   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:02.718921   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:02.718979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:02.758847   58817 cri.go:89] found id: ""
	I0719 15:50:02.758874   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.758885   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:02.758892   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:02.758951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:02.800199   58817 cri.go:89] found id: ""
	I0719 15:50:02.800230   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.800238   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:02.800243   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:02.800289   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:02.840302   58817 cri.go:89] found id: ""
	I0719 15:50:02.840334   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.840345   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:02.840353   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:02.840415   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:02.874769   58817 cri.go:89] found id: ""
	I0719 15:50:02.874794   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.874801   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:02.874818   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:02.874885   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:02.914492   58817 cri.go:89] found id: ""
	I0719 15:50:02.914522   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.914532   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:02.914540   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:02.914601   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:02.951548   58817 cri.go:89] found id: ""
	I0719 15:50:02.951577   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.951588   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:02.951599   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:02.951613   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:03.003081   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:03.003118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:03.017738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:03.017767   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:03.090925   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:03.090947   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:03.090958   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:03.169066   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:03.169101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:59.815894   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.312541   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.513158   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.013500   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:00.583268   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:03.082967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.712269   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:05.724799   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:05.724872   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:05.759074   58817 cri.go:89] found id: ""
	I0719 15:50:05.759101   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.759108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:05.759113   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:05.759169   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:05.798316   58817 cri.go:89] found id: ""
	I0719 15:50:05.798413   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.798432   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:05.798442   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:05.798504   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:05.834861   58817 cri.go:89] found id: ""
	I0719 15:50:05.834890   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.834898   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:05.834903   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:05.834962   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:05.868547   58817 cri.go:89] found id: ""
	I0719 15:50:05.868574   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.868582   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:05.868588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:05.868691   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:05.903684   58817 cri.go:89] found id: ""
	I0719 15:50:05.903718   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.903730   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:05.903738   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:05.903798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:05.938521   58817 cri.go:89] found id: ""
	I0719 15:50:05.938552   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.938567   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:05.938576   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:05.938628   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:05.973683   58817 cri.go:89] found id: ""
	I0719 15:50:05.973710   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.973717   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:05.973723   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:05.973825   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:06.010528   58817 cri.go:89] found id: ""
	I0719 15:50:06.010559   58817 logs.go:276] 0 containers: []
	W0719 15:50:06.010569   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:06.010580   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:06.010593   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:06.053090   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:06.053145   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:06.106906   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:06.106939   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:06.121914   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:06.121944   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:06.197465   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:06.197492   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:06.197507   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:04.814326   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.314104   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:04.513144   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.013900   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.014269   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.582967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.583076   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.583550   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
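The interleaved pod_ready.go entries appear to come from the other profiles in this run polling metrics-server pods in kube-system that never report Ready. A rough manual equivalent of that wait, as a sketch only (the context name is a placeholder and the label selector is an assumption, not read from this log):

    # Wait for metrics-server to become Ready (selector assumed; adjust to the deployment's labels)
    kubectl --context <profile> -n kube-system wait pod \
      -l k8s-app=metrics-server --for=condition=Ready --timeout=120s

    # Or inspect the same Ready condition the test polls
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'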
	I0719 15:50:08.782285   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:08.795115   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:08.795180   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:08.834264   58817 cri.go:89] found id: ""
	I0719 15:50:08.834295   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.834306   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:08.834314   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:08.834371   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:08.873227   58817 cri.go:89] found id: ""
	I0719 15:50:08.873258   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.873268   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:08.873276   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:08.873330   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:08.907901   58817 cri.go:89] found id: ""
	I0719 15:50:08.907929   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.907940   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:08.907948   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:08.908011   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:08.941350   58817 cri.go:89] found id: ""
	I0719 15:50:08.941381   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.941391   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:08.941400   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:08.941453   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:08.978469   58817 cri.go:89] found id: ""
	I0719 15:50:08.978495   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.978502   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:08.978508   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:08.978563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:09.017469   58817 cri.go:89] found id: ""
	I0719 15:50:09.017492   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.017501   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:09.017509   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:09.017563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:09.056675   58817 cri.go:89] found id: ""
	I0719 15:50:09.056703   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.056711   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:09.056718   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:09.056769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:09.096655   58817 cri.go:89] found id: ""
	I0719 15:50:09.096680   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.096688   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:09.096696   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:09.096710   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:09.135765   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:09.135791   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:09.189008   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:09.189044   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:09.203988   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:09.204014   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:09.278418   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:09.278440   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:09.278453   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:11.857017   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:11.870592   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:11.870650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:11.907057   58817 cri.go:89] found id: ""
	I0719 15:50:11.907088   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.907097   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:11.907103   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:11.907152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:11.944438   58817 cri.go:89] found id: ""
	I0719 15:50:11.944466   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.944476   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:11.944484   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:11.944547   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:11.986506   58817 cri.go:89] found id: ""
	I0719 15:50:11.986534   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.986545   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:11.986553   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:11.986610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:12.026171   58817 cri.go:89] found id: ""
	I0719 15:50:12.026221   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.026250   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:12.026260   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:12.026329   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:12.060990   58817 cri.go:89] found id: ""
	I0719 15:50:12.061018   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.061028   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:12.061036   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:12.061097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:12.098545   58817 cri.go:89] found id: ""
	I0719 15:50:12.098573   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.098584   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:12.098591   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:12.098650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:12.134949   58817 cri.go:89] found id: ""
	I0719 15:50:12.134978   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.134989   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:12.134996   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:12.135061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:12.171142   58817 cri.go:89] found id: ""
	I0719 15:50:12.171165   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.171173   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:12.171181   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:12.171193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:12.211496   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:12.211536   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:12.266024   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:12.266060   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:12.280951   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:12.280985   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:12.352245   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:12.352269   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:12.352280   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:09.813831   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.815120   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.815551   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.512872   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.514351   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.584717   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.082745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.929733   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:14.943732   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:14.943815   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:14.980506   58817 cri.go:89] found id: ""
	I0719 15:50:14.980529   58817 logs.go:276] 0 containers: []
	W0719 15:50:14.980539   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:14.980545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:14.980590   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:15.015825   58817 cri.go:89] found id: ""
	I0719 15:50:15.015853   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.015863   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:15.015870   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:15.015937   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:15.054862   58817 cri.go:89] found id: ""
	I0719 15:50:15.054894   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.054905   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:15.054913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:15.054973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:15.092542   58817 cri.go:89] found id: ""
	I0719 15:50:15.092573   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.092590   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:15.092598   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:15.092663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:15.127815   58817 cri.go:89] found id: ""
	I0719 15:50:15.127843   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.127853   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:15.127865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:15.127931   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:15.166423   58817 cri.go:89] found id: ""
	I0719 15:50:15.166446   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.166453   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:15.166459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:15.166517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.199240   58817 cri.go:89] found id: ""
	I0719 15:50:15.199268   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.199277   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:15.199283   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:15.199336   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:15.231927   58817 cri.go:89] found id: ""
	I0719 15:50:15.231957   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.231966   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:15.231978   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:15.231994   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:15.284551   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:15.284586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:15.299152   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:15.299181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:15.374085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:15.374107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:15.374123   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:15.458103   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:15.458144   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
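	(The cycle ending above is the driver's log-collection loop while the control plane is down: it probes each expected component with crictl, finds no containers, and falls back to kubelet/dmesg/CRI-O/container-status logs. A minimal shell sketch of that probe, assuming it is run on the minikube node where crictl and journalctl are available as the commands above show, not a reproduction of the driver itself:

	    #!/bin/bash
	    # Probe each control-plane component the same way the log above does.
	    components="kube-apiserver etcd coredns kube-scheduler kube-proxy \
	    kube-controller-manager kindnet kubernetes-dashboard"
	    for name in $components; do
	        # List all containers (any state) matching the name, IDs only.
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        if [ -z "$ids" ]; then
	            echo "No container was found matching \"$name\""
	        fi
	    done
	    # Fallback log sources gathered when nothing is running:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	The same cycle repeats below every few seconds until the apiserver comes back or the wait times out.)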
	I0719 15:50:18.003862   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:18.019166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:18.019215   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:18.053430   58817 cri.go:89] found id: ""
	I0719 15:50:18.053470   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.053482   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:18.053492   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:18.053565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:18.091897   58817 cri.go:89] found id: ""
	I0719 15:50:18.091922   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.091931   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:18.091936   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:18.091997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:18.127239   58817 cri.go:89] found id: ""
	I0719 15:50:18.127266   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.127277   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:18.127287   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:18.127346   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:18.163927   58817 cri.go:89] found id: ""
	I0719 15:50:18.163953   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.163965   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:18.163973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:18.164032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:18.199985   58817 cri.go:89] found id: ""
	I0719 15:50:18.200015   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.200027   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:18.200034   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:18.200096   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:18.234576   58817 cri.go:89] found id: ""
	I0719 15:50:18.234603   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.234614   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:18.234625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:18.234686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.815701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:17.816052   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.012834   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.014504   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.582156   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.583011   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.270493   58817 cri.go:89] found id: ""
	I0719 15:50:18.270516   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.270526   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:18.270532   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:18.270588   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:18.306779   58817 cri.go:89] found id: ""
	I0719 15:50:18.306813   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.306821   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:18.306832   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:18.306850   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:18.375782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:18.375814   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:18.390595   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:18.390630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:18.459204   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:18.459227   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:18.459243   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:18.540667   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:18.540724   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.084736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:21.099416   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:21.099495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:21.133193   58817 cri.go:89] found id: ""
	I0719 15:50:21.133216   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.133224   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:21.133231   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:21.133309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:21.174649   58817 cri.go:89] found id: ""
	I0719 15:50:21.174679   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.174689   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:21.174697   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:21.174757   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:21.208279   58817 cri.go:89] found id: ""
	I0719 15:50:21.208309   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.208319   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:21.208325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:21.208386   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:21.242199   58817 cri.go:89] found id: ""
	I0719 15:50:21.242222   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.242229   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:21.242247   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:21.242301   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:21.278018   58817 cri.go:89] found id: ""
	I0719 15:50:21.278050   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.278059   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:21.278069   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:21.278125   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:21.314397   58817 cri.go:89] found id: ""
	I0719 15:50:21.314419   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.314427   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:21.314435   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:21.314490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:21.349041   58817 cri.go:89] found id: ""
	I0719 15:50:21.349067   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.349075   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:21.349080   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:21.349129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:21.387325   58817 cri.go:89] found id: ""
	I0719 15:50:21.387353   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.387361   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:21.387369   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:21.387384   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:21.401150   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:21.401177   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:21.465784   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:21.465810   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:21.465821   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:21.545965   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:21.545998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.584054   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:21.584081   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:20.312912   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:22.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:20.513572   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.014103   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:21.082689   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.583483   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:24.139199   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:24.152485   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:24.152552   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:24.186387   58817 cri.go:89] found id: ""
	I0719 15:50:24.186417   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.186427   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:24.186435   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:24.186494   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:24.226061   58817 cri.go:89] found id: ""
	I0719 15:50:24.226093   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.226103   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:24.226111   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:24.226168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:24.265542   58817 cri.go:89] found id: ""
	I0719 15:50:24.265566   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.265574   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:24.265579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:24.265630   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:24.300277   58817 cri.go:89] found id: ""
	I0719 15:50:24.300308   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.300318   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:24.300325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:24.300378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:24.340163   58817 cri.go:89] found id: ""
	I0719 15:50:24.340192   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.340203   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:24.340211   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:24.340270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:24.375841   58817 cri.go:89] found id: ""
	I0719 15:50:24.375863   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.375873   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:24.375881   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:24.375941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:24.413528   58817 cri.go:89] found id: ""
	I0719 15:50:24.413558   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.413569   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:24.413577   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:24.413641   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:24.451101   58817 cri.go:89] found id: ""
	I0719 15:50:24.451129   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.451139   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:24.451148   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:24.451163   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:24.491150   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:24.491178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.544403   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:24.544436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:24.560376   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:24.560407   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:24.633061   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:24.633081   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:24.633097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.214261   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:27.227642   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:27.227724   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:27.263805   58817 cri.go:89] found id: ""
	I0719 15:50:27.263838   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.263851   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:27.263859   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:27.263941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:27.299817   58817 cri.go:89] found id: ""
	I0719 15:50:27.299860   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.299872   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:27.299879   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:27.299947   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:27.339924   58817 cri.go:89] found id: ""
	I0719 15:50:27.339953   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.339963   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:27.339971   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:27.340036   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:27.375850   58817 cri.go:89] found id: ""
	I0719 15:50:27.375877   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.375885   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:27.375891   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:27.375940   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:27.410395   58817 cri.go:89] found id: ""
	I0719 15:50:27.410420   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.410429   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:27.410437   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:27.410498   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:27.444124   58817 cri.go:89] found id: ""
	I0719 15:50:27.444154   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.444162   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:27.444167   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:27.444230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:27.478162   58817 cri.go:89] found id: ""
	I0719 15:50:27.478191   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.478202   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:27.478210   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:27.478285   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:27.514901   58817 cri.go:89] found id: ""
	I0719 15:50:27.514939   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.514949   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:27.514959   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:27.514973   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.591783   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:27.591815   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:27.629389   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:27.629431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:27.684318   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:27.684351   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:27.698415   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:27.698441   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:27.770032   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
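	(Every "failed describe nodes" entry above is the same symptom: the apiserver is not answering on localhost:8443, so the bundled kubectl cannot connect. A hedged sketch of the manual check these entries imply, with the binary path, version, and kubeconfig path taken from the log and their presence on the node assumed:

	    # Run on the minikube node; mirrors the command the test driver executes.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig
	    # While the control plane is down this fails with:
	    #   The connection to the server localhost:8443 was refused - did you
	    #   specify the right host or port?
	    echo "exit status: $?"   # the driver records "Process exited with status 1"

	The check succeeds only once a kube-apiserver container is running again.)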
	I0719 15:50:25.312127   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.312599   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.512955   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.515102   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.583597   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:28.083843   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.270332   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:30.284645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:30.284716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:30.324096   58817 cri.go:89] found id: ""
	I0719 15:50:30.324120   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.324128   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:30.324133   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:30.324181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:30.362682   58817 cri.go:89] found id: ""
	I0719 15:50:30.362749   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.362769   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:30.362777   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:30.362848   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:30.400797   58817 cri.go:89] found id: ""
	I0719 15:50:30.400829   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.400840   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:30.400847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:30.400910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:30.438441   58817 cri.go:89] found id: ""
	I0719 15:50:30.438471   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.438482   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:30.438490   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:30.438556   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:30.481525   58817 cri.go:89] found id: ""
	I0719 15:50:30.481555   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.481567   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:30.481581   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:30.481643   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:30.527384   58817 cri.go:89] found id: ""
	I0719 15:50:30.527416   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.527426   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:30.527434   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:30.527495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:30.591502   58817 cri.go:89] found id: ""
	I0719 15:50:30.591530   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.591540   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:30.591548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:30.591603   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:30.627271   58817 cri.go:89] found id: ""
	I0719 15:50:30.627298   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.627306   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:30.627315   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:30.627326   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:30.680411   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:30.680463   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:30.694309   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:30.694344   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:30.771740   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.771776   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:30.771794   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:30.857591   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:30.857625   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:29.815683   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.312009   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.312309   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.013332   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.013381   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.082937   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.407376   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:33.421602   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:33.421680   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:33.458608   58817 cri.go:89] found id: ""
	I0719 15:50:33.458640   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.458650   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:33.458658   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:33.458720   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:33.494250   58817 cri.go:89] found id: ""
	I0719 15:50:33.494279   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.494290   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:33.494298   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:33.494363   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:33.534768   58817 cri.go:89] found id: ""
	I0719 15:50:33.534793   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.534804   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:33.534811   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:33.534876   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:33.569912   58817 cri.go:89] found id: ""
	I0719 15:50:33.569942   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.569950   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:33.569955   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:33.570010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:33.605462   58817 cri.go:89] found id: ""
	I0719 15:50:33.605486   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.605496   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:33.605503   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:33.605569   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:33.649091   58817 cri.go:89] found id: ""
	I0719 15:50:33.649121   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.649129   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:33.649134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:33.649184   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:33.682056   58817 cri.go:89] found id: ""
	I0719 15:50:33.682084   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.682092   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:33.682097   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:33.682145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:33.717454   58817 cri.go:89] found id: ""
	I0719 15:50:33.717483   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.717492   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:33.717501   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:33.717513   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:33.770793   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:33.770828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:33.784549   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:33.784583   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:33.860831   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:33.860851   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:33.860862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:33.936003   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:33.936037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.476206   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:36.489032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:36.489090   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:36.525070   58817 cri.go:89] found id: ""
	I0719 15:50:36.525098   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.525108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:36.525116   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:36.525171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:36.560278   58817 cri.go:89] found id: ""
	I0719 15:50:36.560301   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.560309   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:36.560315   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:36.560367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:36.595594   58817 cri.go:89] found id: ""
	I0719 15:50:36.595620   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.595630   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:36.595637   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:36.595696   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:36.631403   58817 cri.go:89] found id: ""
	I0719 15:50:36.631434   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.631442   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:36.631447   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:36.631502   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:36.671387   58817 cri.go:89] found id: ""
	I0719 15:50:36.671413   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.671424   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:36.671431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:36.671492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:36.705473   58817 cri.go:89] found id: ""
	I0719 15:50:36.705500   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.705507   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:36.705514   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:36.705559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:36.741077   58817 cri.go:89] found id: ""
	I0719 15:50:36.741110   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.741126   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:36.741133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:36.741195   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:36.781987   58817 cri.go:89] found id: ""
	I0719 15:50:36.782016   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.782025   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:36.782036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:36.782051   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:36.795107   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:36.795138   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:36.869034   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:36.869056   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:36.869070   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:36.946172   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:36.946207   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.983497   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:36.983535   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:36.812745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.312184   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.513321   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:36.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.012035   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:35.084310   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:37.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.537658   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:39.551682   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:39.551756   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:39.588176   58817 cri.go:89] found id: ""
	I0719 15:50:39.588199   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.588206   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:39.588212   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:39.588255   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:39.623202   58817 cri.go:89] found id: ""
	I0719 15:50:39.623235   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.623245   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:39.623265   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:39.623317   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:39.658601   58817 cri.go:89] found id: ""
	I0719 15:50:39.658634   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.658646   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:39.658653   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:39.658712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:39.694820   58817 cri.go:89] found id: ""
	I0719 15:50:39.694842   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.694852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:39.694859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:39.694922   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:39.734296   58817 cri.go:89] found id: ""
	I0719 15:50:39.734325   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.734333   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:39.734339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:39.734393   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:39.773416   58817 cri.go:89] found id: ""
	I0719 15:50:39.773506   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.773527   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:39.773538   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:39.773614   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:39.812265   58817 cri.go:89] found id: ""
	I0719 15:50:39.812293   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.812303   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:39.812311   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:39.812366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:39.849148   58817 cri.go:89] found id: ""
	I0719 15:50:39.849177   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.849188   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:39.849199   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:39.849213   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.900254   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:39.900285   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:39.913997   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:39.914025   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:39.986937   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:39.986963   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:39.986982   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:40.071967   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:40.072009   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:42.612170   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:42.625741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:42.625824   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:42.662199   58817 cri.go:89] found id: ""
	I0719 15:50:42.662230   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.662253   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:42.662261   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:42.662314   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:42.702346   58817 cri.go:89] found id: ""
	I0719 15:50:42.702374   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.702387   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:42.702394   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:42.702454   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:42.743446   58817 cri.go:89] found id: ""
	I0719 15:50:42.743475   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.743488   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:42.743495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:42.743555   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:42.783820   58817 cri.go:89] found id: ""
	I0719 15:50:42.783844   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.783852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:42.783858   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:42.783917   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:42.821375   58817 cri.go:89] found id: ""
	I0719 15:50:42.821403   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.821414   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:42.821421   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:42.821484   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:42.856010   58817 cri.go:89] found id: ""
	I0719 15:50:42.856037   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.856045   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:42.856051   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:42.856097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:42.895867   58817 cri.go:89] found id: ""
	I0719 15:50:42.895894   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.895902   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:42.895908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:42.895955   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:42.933077   58817 cri.go:89] found id: ""
	I0719 15:50:42.933106   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.933114   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:42.933123   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:42.933135   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:42.984103   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:42.984142   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:42.998043   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:42.998075   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:43.069188   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:43.069210   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:43.069222   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:43.148933   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:43.148991   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:41.313263   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.816257   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:41.014458   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.017012   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:40.083591   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:42.582246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:44.582857   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.687007   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:45.701019   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:45.701099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:45.737934   58817 cri.go:89] found id: ""
	I0719 15:50:45.737960   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.737970   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:45.737978   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:45.738037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:45.774401   58817 cri.go:89] found id: ""
	I0719 15:50:45.774428   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.774438   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:45.774447   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:45.774503   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:45.814507   58817 cri.go:89] found id: ""
	I0719 15:50:45.814533   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.814544   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:45.814551   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:45.814610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:45.855827   58817 cri.go:89] found id: ""
	I0719 15:50:45.855852   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.855870   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:45.855877   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:45.855928   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:45.898168   58817 cri.go:89] found id: ""
	I0719 15:50:45.898196   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.898204   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:45.898209   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:45.898281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:45.933402   58817 cri.go:89] found id: ""
	I0719 15:50:45.933433   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.933449   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:45.933468   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:45.933525   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:45.971415   58817 cri.go:89] found id: ""
	I0719 15:50:45.971443   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.971451   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:45.971457   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:45.971508   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:46.006700   58817 cri.go:89] found id: ""
	I0719 15:50:46.006729   58817 logs.go:276] 0 containers: []
	W0719 15:50:46.006739   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:46.006750   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:46.006764   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:46.083885   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:46.083925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:46.122277   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:46.122308   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:46.172907   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:46.172940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:46.186365   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:46.186392   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:46.263803   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:46.312320   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.312805   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.512849   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.013822   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:46.582906   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.583537   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.764336   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:48.778927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:48.779002   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:48.816538   58817 cri.go:89] found id: ""
	I0719 15:50:48.816566   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.816576   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:48.816589   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:48.816657   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:48.852881   58817 cri.go:89] found id: ""
	I0719 15:50:48.852904   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.852912   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:48.852925   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:48.852987   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:48.886156   58817 cri.go:89] found id: ""
	I0719 15:50:48.886187   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.886196   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:48.886202   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:48.886271   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:48.922221   58817 cri.go:89] found id: ""
	I0719 15:50:48.922270   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.922281   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:48.922289   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:48.922350   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:48.957707   58817 cri.go:89] found id: ""
	I0719 15:50:48.957735   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.957743   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:48.957750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:48.957797   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:48.994635   58817 cri.go:89] found id: ""
	I0719 15:50:48.994667   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.994679   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:48.994687   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:48.994747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:49.028849   58817 cri.go:89] found id: ""
	I0719 15:50:49.028873   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.028881   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:49.028886   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:49.028933   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:49.063835   58817 cri.go:89] found id: ""
	I0719 15:50:49.063865   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.063875   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:49.063885   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:49.063900   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:49.144709   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:49.144751   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:49.184783   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:49.184819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:49.237005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:49.237037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:49.250568   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:49.250595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:49.319473   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:51.820132   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:51.833230   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:51.833298   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:51.870393   58817 cri.go:89] found id: ""
	I0719 15:50:51.870424   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.870435   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:51.870442   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:51.870496   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:51.906094   58817 cri.go:89] found id: ""
	I0719 15:50:51.906119   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.906132   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:51.906139   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:51.906192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:51.941212   58817 cri.go:89] found id: ""
	I0719 15:50:51.941236   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.941244   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:51.941257   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:51.941300   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:51.973902   58817 cri.go:89] found id: ""
	I0719 15:50:51.973925   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.973933   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:51.973938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:51.973983   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:52.010449   58817 cri.go:89] found id: ""
	I0719 15:50:52.010476   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.010486   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:52.010493   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:52.010551   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:52.047317   58817 cri.go:89] found id: ""
	I0719 15:50:52.047343   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.047353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:52.047360   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:52.047405   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:52.081828   58817 cri.go:89] found id: ""
	I0719 15:50:52.081859   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.081868   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:52.081875   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:52.081946   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:52.119128   58817 cri.go:89] found id: ""
	I0719 15:50:52.119156   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.119164   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:52.119172   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:52.119185   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:52.132928   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:52.132955   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:52.203075   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:52.203099   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:52.203114   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:52.278743   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:52.278781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:52.325456   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:52.325492   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:50.815488   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.312626   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:50.013996   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:52.514493   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:51.082358   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.582566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:54.879243   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:54.894078   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:54.894147   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:54.931463   58817 cri.go:89] found id: ""
	I0719 15:50:54.931496   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.931507   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:54.931514   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:54.931585   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:54.968803   58817 cri.go:89] found id: ""
	I0719 15:50:54.968831   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.968840   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:54.968847   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:54.968911   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:55.005621   58817 cri.go:89] found id: ""
	I0719 15:50:55.005646   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.005657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:55.005664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:55.005733   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:55.040271   58817 cri.go:89] found id: ""
	I0719 15:50:55.040292   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.040299   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:55.040305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:55.040349   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:55.072693   58817 cri.go:89] found id: ""
	I0719 15:50:55.072714   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.072722   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:55.072728   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:55.072779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:55.111346   58817 cri.go:89] found id: ""
	I0719 15:50:55.111373   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.111381   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:55.111386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:55.111430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:55.149358   58817 cri.go:89] found id: ""
	I0719 15:50:55.149385   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.149395   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:55.149402   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:55.149459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:55.183807   58817 cri.go:89] found id: ""
	I0719 15:50:55.183834   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.183845   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:55.183856   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:55.183870   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:55.234128   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:55.234157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:55.247947   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:55.247971   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:55.317405   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:55.317425   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:55.317436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:55.398613   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:55.398649   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:57.945601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:57.960139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:57.960193   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:58.000436   58817 cri.go:89] found id: ""
	I0719 15:50:58.000462   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.000469   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:58.000476   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:58.000522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:58.041437   58817 cri.go:89] found id: ""
	I0719 15:50:58.041463   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.041472   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:58.041477   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:58.041539   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:58.077280   58817 cri.go:89] found id: ""
	I0719 15:50:58.077303   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.077311   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:58.077317   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:58.077373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:58.111992   58817 cri.go:89] found id: ""
	I0719 15:50:58.112019   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.112026   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:58.112032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:58.112107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:58.146582   58817 cri.go:89] found id: ""
	I0719 15:50:58.146610   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.146620   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:58.146625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:58.146669   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:58.182159   58817 cri.go:89] found id: ""
	I0719 15:50:58.182187   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.182196   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:58.182204   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:58.182279   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:58.215804   58817 cri.go:89] found id: ""
	I0719 15:50:58.215834   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.215844   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:58.215852   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:58.215913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:58.249366   58817 cri.go:89] found id: ""
	I0719 15:50:58.249392   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.249402   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:58.249413   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:58.249430   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:50:55.814460   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.313739   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:55.014039   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:57.513248   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:56.082876   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.583172   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	W0719 15:50:58.324510   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:58.324536   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:58.324550   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:58.406320   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:58.406353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:58.449820   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:58.449854   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:58.502245   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:58.502281   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:01.018374   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:01.032683   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:01.032753   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:01.071867   58817 cri.go:89] found id: ""
	I0719 15:51:01.071898   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.071910   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:01.071917   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:01.071982   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:01.108227   58817 cri.go:89] found id: ""
	I0719 15:51:01.108251   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.108259   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:01.108264   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:01.108309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:01.143029   58817 cri.go:89] found id: ""
	I0719 15:51:01.143064   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.143076   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:01.143083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:01.143154   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:01.178871   58817 cri.go:89] found id: ""
	I0719 15:51:01.178901   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.178911   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:01.178919   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:01.178974   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:01.216476   58817 cri.go:89] found id: ""
	I0719 15:51:01.216507   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.216518   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:01.216526   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:01.216584   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:01.254534   58817 cri.go:89] found id: ""
	I0719 15:51:01.254557   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.254565   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:01.254572   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:01.254617   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:01.293156   58817 cri.go:89] found id: ""
	I0719 15:51:01.293187   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.293198   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:01.293212   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:01.293278   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:01.328509   58817 cri.go:89] found id: ""
	I0719 15:51:01.328538   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.328549   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:01.328560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:01.328574   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:01.399659   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:01.399678   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:01.399693   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:01.476954   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:01.476993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:01.519513   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:01.519539   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:01.571976   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:01.572015   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:00.812445   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.813629   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.011751   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.013062   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.013473   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.584028   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:03.082149   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.088726   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:04.102579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:04.102642   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:04.141850   58817 cri.go:89] found id: ""
	I0719 15:51:04.141888   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.141899   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:04.141907   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:04.141988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:04.177821   58817 cri.go:89] found id: ""
	I0719 15:51:04.177846   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.177854   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:04.177859   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:04.177914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:04.212905   58817 cri.go:89] found id: ""
	I0719 15:51:04.212935   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.212945   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:04.212951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:04.213012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:04.249724   58817 cri.go:89] found id: ""
	I0719 15:51:04.249762   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.249773   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:04.249781   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:04.249843   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:04.285373   58817 cri.go:89] found id: ""
	I0719 15:51:04.285407   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.285418   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:04.285430   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:04.285490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:04.348842   58817 cri.go:89] found id: ""
	I0719 15:51:04.348878   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.348888   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:04.348895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:04.348963   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:04.384420   58817 cri.go:89] found id: ""
	I0719 15:51:04.384448   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.384459   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:04.384466   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:04.384533   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:04.420716   58817 cri.go:89] found id: ""
	I0719 15:51:04.420746   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.420754   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:04.420763   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:04.420775   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:04.472986   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:04.473027   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.488911   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:04.488938   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:04.563103   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:04.563125   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:04.563139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:04.640110   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:04.640151   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.183190   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:07.196605   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:07.196667   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:07.234974   58817 cri.go:89] found id: ""
	I0719 15:51:07.235002   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.235010   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:07.235016   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:07.235066   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:07.269045   58817 cri.go:89] found id: ""
	I0719 15:51:07.269078   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.269089   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:07.269096   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:07.269156   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:07.308866   58817 cri.go:89] found id: ""
	I0719 15:51:07.308897   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.308907   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:07.308914   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:07.308973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:07.344406   58817 cri.go:89] found id: ""
	I0719 15:51:07.344440   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.344451   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:07.344459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:07.344517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:07.379914   58817 cri.go:89] found id: ""
	I0719 15:51:07.379948   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.379956   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:07.379962   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:07.380010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:07.420884   58817 cri.go:89] found id: ""
	I0719 15:51:07.420923   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.420934   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:07.420942   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:07.421012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:07.455012   58817 cri.go:89] found id: ""
	I0719 15:51:07.455041   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.455071   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:07.455082   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:07.455151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:07.492321   58817 cri.go:89] found id: ""
	I0719 15:51:07.492346   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.492354   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:07.492362   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:07.492374   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:07.506377   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:07.506408   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:07.578895   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:07.578928   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:07.578943   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:07.662333   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:07.662373   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.701823   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:07.701856   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:05.312865   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.816945   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:06.513634   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.012283   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:05.084185   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.583429   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.583944   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:10.256610   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:10.270156   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:10.270225   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:10.311318   58817 cri.go:89] found id: ""
	I0719 15:51:10.311347   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.311357   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:10.311365   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:10.311422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:10.347145   58817 cri.go:89] found id: ""
	I0719 15:51:10.347174   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.347183   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:10.347189   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:10.347243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:10.381626   58817 cri.go:89] found id: ""
	I0719 15:51:10.381659   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.381672   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:10.381680   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:10.381750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:10.417077   58817 cri.go:89] found id: ""
	I0719 15:51:10.417103   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.417111   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:10.417117   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:10.417174   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:10.454094   58817 cri.go:89] found id: ""
	I0719 15:51:10.454123   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.454131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:10.454137   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:10.454185   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:10.489713   58817 cri.go:89] found id: ""
	I0719 15:51:10.489739   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.489747   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:10.489753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:10.489799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:10.524700   58817 cri.go:89] found id: ""
	I0719 15:51:10.524737   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.524745   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:10.524753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:10.524810   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:10.564249   58817 cri.go:89] found id: ""
	I0719 15:51:10.564277   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.564285   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:10.564293   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:10.564309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.618563   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:10.618599   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:10.633032   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:10.633058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:10.706504   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:10.706530   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:10.706546   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:10.800542   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:10.800581   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:10.315941   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:12.812732   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.013749   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.513338   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.584335   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:14.083745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.357761   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:13.371415   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:13.371492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:13.406666   58817 cri.go:89] found id: ""
	I0719 15:51:13.406695   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.406705   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:13.406713   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:13.406773   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:13.448125   58817 cri.go:89] found id: ""
	I0719 15:51:13.448153   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.448164   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:13.448171   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:13.448233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:13.483281   58817 cri.go:89] found id: ""
	I0719 15:51:13.483306   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.483315   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:13.483323   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:13.483384   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:13.522499   58817 cri.go:89] found id: ""
	I0719 15:51:13.522527   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.522538   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:13.522545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:13.522605   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:13.560011   58817 cri.go:89] found id: ""
	I0719 15:51:13.560038   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.560049   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:13.560056   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:13.560115   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:13.596777   58817 cri.go:89] found id: ""
	I0719 15:51:13.596812   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.596824   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:13.596832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:13.596883   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:13.633765   58817 cri.go:89] found id: ""
	I0719 15:51:13.633790   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.633798   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:13.633804   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:13.633857   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:13.670129   58817 cri.go:89] found id: ""
	I0719 15:51:13.670151   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.670160   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:13.670168   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:13.670179   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:13.745337   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:13.745363   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:13.745375   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:13.827800   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:13.827831   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.871659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:13.871695   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:13.925445   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:13.925478   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.439455   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:16.454414   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:16.454485   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:16.494962   58817 cri.go:89] found id: ""
	I0719 15:51:16.494987   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.494997   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:16.495004   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:16.495048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:16.540948   58817 cri.go:89] found id: ""
	I0719 15:51:16.540978   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.540986   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:16.540992   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:16.541052   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:16.588886   58817 cri.go:89] found id: ""
	I0719 15:51:16.588916   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.588926   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:16.588933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:16.588990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:16.649174   58817 cri.go:89] found id: ""
	I0719 15:51:16.649198   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.649207   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:16.649214   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:16.649260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:16.688759   58817 cri.go:89] found id: ""
	I0719 15:51:16.688787   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.688794   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:16.688800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:16.688860   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:16.724730   58817 cri.go:89] found id: ""
	I0719 15:51:16.724759   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.724767   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:16.724773   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:16.724831   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:16.762972   58817 cri.go:89] found id: ""
	I0719 15:51:16.762995   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.763002   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:16.763007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:16.763058   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:16.798054   58817 cri.go:89] found id: ""
	I0719 15:51:16.798080   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.798088   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:16.798096   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:16.798107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:16.887495   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:16.887533   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:16.929384   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:16.929412   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:16.978331   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:16.978362   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.991663   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:16.991687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:17.064706   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
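	The cycle above repeats for the rest of this start attempt: for each control-plane component minikube runs a name-filtered crictl query, finds no container, and falls back to gathering kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of the same probe, runnable by hand on the node (the loop wrapper and echo are illustrative additions; the crictl invocation is copied verbatim from the log lines above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  # same query as the log: list containers in any state whose name matches
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done

	An empty result for every component, as seen throughout this section, means CRI-O currently has no control-plane containers at all, not even exited ones.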
	I0719 15:51:15.311404   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:17.312317   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.013193   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:18.014317   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.583403   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.082807   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.565881   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:19.579476   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:19.579536   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:19.614551   58817 cri.go:89] found id: ""
	I0719 15:51:19.614576   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.614586   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:19.614595   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:19.614655   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:19.657984   58817 cri.go:89] found id: ""
	I0719 15:51:19.658012   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.658023   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:19.658030   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:19.658098   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:19.692759   58817 cri.go:89] found id: ""
	I0719 15:51:19.692785   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.692793   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:19.692800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:19.692855   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:19.726119   58817 cri.go:89] found id: ""
	I0719 15:51:19.726148   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.726158   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:19.726174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:19.726230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:19.763348   58817 cri.go:89] found id: ""
	I0719 15:51:19.763372   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.763379   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:19.763385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:19.763439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:19.796880   58817 cri.go:89] found id: ""
	I0719 15:51:19.796909   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.796923   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:19.796929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:19.796977   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:19.831819   58817 cri.go:89] found id: ""
	I0719 15:51:19.831845   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.831853   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:19.831859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:19.831913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:19.866787   58817 cri.go:89] found id: ""
	I0719 15:51:19.866814   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.866825   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:19.866835   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:19.866848   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:19.914087   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:19.914120   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.927236   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:19.927260   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:19.995619   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:19.995643   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:19.995658   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:20.084355   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:20.084385   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:22.623263   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:22.637745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:22.637818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:22.678276   58817 cri.go:89] found id: ""
	I0719 15:51:22.678305   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.678317   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:22.678325   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:22.678378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:22.716710   58817 cri.go:89] found id: ""
	I0719 15:51:22.716736   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.716753   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:22.716761   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:22.716828   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:22.754965   58817 cri.go:89] found id: ""
	I0719 15:51:22.754993   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.755002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:22.755008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:22.755054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:22.788474   58817 cri.go:89] found id: ""
	I0719 15:51:22.788508   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.788519   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:22.788527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:22.788586   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:22.823838   58817 cri.go:89] found id: ""
	I0719 15:51:22.823872   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.823882   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:22.823889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:22.823950   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:22.863086   58817 cri.go:89] found id: ""
	I0719 15:51:22.863127   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.863138   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:22.863146   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:22.863211   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:22.899292   58817 cri.go:89] found id: ""
	I0719 15:51:22.899321   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.899331   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:22.899339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:22.899403   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:22.932292   58817 cri.go:89] found id: ""
	I0719 15:51:22.932318   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.932328   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:22.932338   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:22.932353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:23.003438   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:23.003460   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:23.003477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:23.088349   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:23.088391   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:23.132169   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:23.132194   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:23.184036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:23.184069   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.812659   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.813178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.311781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:20.512610   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:22.512707   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.083030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:23.583501   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.698493   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:25.712199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:25.712267   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:25.750330   58817 cri.go:89] found id: ""
	I0719 15:51:25.750358   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.750368   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:25.750375   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:25.750434   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:25.784747   58817 cri.go:89] found id: ""
	I0719 15:51:25.784777   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.784788   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:25.784794   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:25.784853   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:25.821272   58817 cri.go:89] found id: ""
	I0719 15:51:25.821297   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.821308   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:25.821315   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:25.821370   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:25.858697   58817 cri.go:89] found id: ""
	I0719 15:51:25.858723   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.858732   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:25.858737   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:25.858782   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:25.901706   58817 cri.go:89] found id: ""
	I0719 15:51:25.901738   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.901749   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:25.901757   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:25.901818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:25.943073   58817 cri.go:89] found id: ""
	I0719 15:51:25.943103   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.943115   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:25.943122   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:25.943190   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:25.982707   58817 cri.go:89] found id: ""
	I0719 15:51:25.982731   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.982739   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:25.982745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:25.982791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:26.023419   58817 cri.go:89] found id: ""
	I0719 15:51:26.023442   58817 logs.go:276] 0 containers: []
	W0719 15:51:26.023449   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:26.023456   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:26.023468   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:26.103842   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:26.103875   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:26.143567   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:26.143594   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:26.199821   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:26.199862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:26.214829   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:26.214865   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:26.287368   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:26.312416   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.313406   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.513171   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:27.012377   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:29.014890   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.583785   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.083633   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.788202   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:28.801609   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:28.801676   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:28.834911   58817 cri.go:89] found id: ""
	I0719 15:51:28.834937   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.834947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:28.834955   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:28.835013   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:28.868219   58817 cri.go:89] found id: ""
	I0719 15:51:28.868242   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.868250   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:28.868256   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:28.868315   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:28.904034   58817 cri.go:89] found id: ""
	I0719 15:51:28.904055   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.904063   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:28.904068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:28.904121   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:28.941019   58817 cri.go:89] found id: ""
	I0719 15:51:28.941051   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.941061   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:28.941068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:28.941129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:28.976309   58817 cri.go:89] found id: ""
	I0719 15:51:28.976335   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.976346   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:28.976352   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:28.976410   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:29.011340   58817 cri.go:89] found id: ""
	I0719 15:51:29.011368   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.011378   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:29.011388   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:29.011447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:29.044356   58817 cri.go:89] found id: ""
	I0719 15:51:29.044378   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.044385   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:29.044390   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:29.044438   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:29.080883   58817 cri.go:89] found id: ""
	I0719 15:51:29.080910   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.080919   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:29.080929   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:29.080941   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:29.160266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:29.160303   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:29.198221   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:29.198267   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:29.249058   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:29.249088   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:29.262711   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:29.262740   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:29.335654   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
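	Every "describe nodes" attempt in this run fails identically: the bundled kubectl at /var/lib/minikube/binaries/v1.20.0/kubectl cannot reach the API server on localhost:8443 (connection refused). A quick manual check on the node, assuming the standard apiserver /healthz endpoint is the thing to probe (the curl line is illustrative and not taken from the log; the kubectl line is copied from it):

	# is anything answering on the apiserver port?
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"
	# the exact command minikube keeps retrying, from the log above
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	Given that crictl finds no kube-apiserver container above, the refused connection on 8443 is the expected symptom rather than a separate failure.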
	I0719 15:51:31.836354   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:31.851895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:31.851957   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:31.887001   58817 cri.go:89] found id: ""
	I0719 15:51:31.887036   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.887052   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:31.887058   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:31.887107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:31.922102   58817 cri.go:89] found id: ""
	I0719 15:51:31.922132   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.922140   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:31.922145   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:31.922196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:31.960183   58817 cri.go:89] found id: ""
	I0719 15:51:31.960208   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.960215   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:31.960221   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:31.960263   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:31.994822   58817 cri.go:89] found id: ""
	I0719 15:51:31.994849   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.994859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:31.994865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:31.994912   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:32.034110   58817 cri.go:89] found id: ""
	I0719 15:51:32.034136   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.034145   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:32.034151   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:32.034209   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:32.071808   58817 cri.go:89] found id: ""
	I0719 15:51:32.071834   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.071842   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:32.071847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:32.071910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:32.110784   58817 cri.go:89] found id: ""
	I0719 15:51:32.110810   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.110820   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:32.110828   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:32.110895   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:32.148052   58817 cri.go:89] found id: ""
	I0719 15:51:32.148086   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.148097   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:32.148108   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:32.148124   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:32.198891   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:32.198926   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:32.212225   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:32.212251   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:32.288389   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:32.288412   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:32.288431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:32.368196   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:32.368229   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:30.811822   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.813013   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:31.512155   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.012636   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:30.083916   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.582845   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.582945   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.911872   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:34.926689   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:34.926771   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:34.959953   58817 cri.go:89] found id: ""
	I0719 15:51:34.959982   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.959992   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:34.960000   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:34.960061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:34.999177   58817 cri.go:89] found id: ""
	I0719 15:51:34.999206   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.999216   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:34.999223   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:34.999283   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:35.036001   58817 cri.go:89] found id: ""
	I0719 15:51:35.036034   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.036045   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:35.036052   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:35.036099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:35.070375   58817 cri.go:89] found id: ""
	I0719 15:51:35.070404   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.070415   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:35.070423   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:35.070483   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:35.106940   58817 cri.go:89] found id: ""
	I0719 15:51:35.106969   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.106979   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:35.106984   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:35.107031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:35.151664   58817 cri.go:89] found id: ""
	I0719 15:51:35.151688   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.151695   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:35.151700   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:35.151748   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:35.187536   58817 cri.go:89] found id: ""
	I0719 15:51:35.187564   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.187578   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:35.187588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:35.187662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.222614   58817 cri.go:89] found id: ""
	I0719 15:51:35.222642   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.222652   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:35.222662   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:35.222677   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:35.273782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:35.273816   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:35.288147   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:35.288176   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:35.361085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:35.361107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:35.361118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:35.443327   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:35.443358   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:37.994508   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:38.007709   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:38.007779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:38.040910   58817 cri.go:89] found id: ""
	I0719 15:51:38.040940   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.040947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:38.040954   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:38.040999   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:38.080009   58817 cri.go:89] found id: ""
	I0719 15:51:38.080039   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.080058   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:38.080066   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:38.080137   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:38.115997   58817 cri.go:89] found id: ""
	I0719 15:51:38.116018   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.116026   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:38.116031   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:38.116079   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:38.150951   58817 cri.go:89] found id: ""
	I0719 15:51:38.150973   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.150981   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:38.150987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:38.151045   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:38.184903   58817 cri.go:89] found id: ""
	I0719 15:51:38.184938   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.184949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:38.184956   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:38.185014   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:38.218099   58817 cri.go:89] found id: ""
	I0719 15:51:38.218123   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.218131   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:38.218138   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:38.218192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:38.252965   58817 cri.go:89] found id: ""
	I0719 15:51:38.252990   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.252997   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:38.253003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:38.253047   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.313638   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:37.813400   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.013415   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.513387   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.583140   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:39.084770   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.289710   58817 cri.go:89] found id: ""
	I0719 15:51:38.289739   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.289749   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:38.289757   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:38.289770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:38.340686   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:38.340715   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:38.354334   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:38.354357   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:38.424410   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:38.424438   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:38.424452   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:38.500744   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:38.500781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.043436   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:41.056857   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:41.056914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:41.093651   58817 cri.go:89] found id: ""
	I0719 15:51:41.093678   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.093688   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:41.093695   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:41.093749   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:41.129544   58817 cri.go:89] found id: ""
	I0719 15:51:41.129572   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.129580   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:41.129586   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:41.129646   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:41.163416   58817 cri.go:89] found id: ""
	I0719 15:51:41.163444   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.163457   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:41.163465   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:41.163520   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:41.199180   58817 cri.go:89] found id: ""
	I0719 15:51:41.199205   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.199212   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:41.199220   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:41.199274   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:41.233891   58817 cri.go:89] found id: ""
	I0719 15:51:41.233919   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.233929   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:41.233936   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:41.233990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:41.270749   58817 cri.go:89] found id: ""
	I0719 15:51:41.270777   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.270788   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:41.270794   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:41.270841   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:41.308365   58817 cri.go:89] found id: ""
	I0719 15:51:41.308393   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.308402   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:41.308408   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:41.308462   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:41.344692   58817 cri.go:89] found id: ""
	I0719 15:51:41.344720   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.344729   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:41.344738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:41.344749   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:41.420009   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:41.420035   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:41.420052   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:41.503356   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:41.503397   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.543875   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:41.543905   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:41.595322   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:41.595353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:40.312909   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:42.812703   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.011956   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:43.513117   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.584336   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.082447   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.110343   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:44.125297   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:44.125365   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:44.160356   58817 cri.go:89] found id: ""
	I0719 15:51:44.160387   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.160398   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:44.160405   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:44.160461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:44.195025   58817 cri.go:89] found id: ""
	I0719 15:51:44.195055   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.195065   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:44.195073   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:44.195140   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:44.227871   58817 cri.go:89] found id: ""
	I0719 15:51:44.227907   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.227929   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:44.227937   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:44.228000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:44.265270   58817 cri.go:89] found id: ""
	I0719 15:51:44.265296   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.265305   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:44.265312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:44.265368   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:44.298714   58817 cri.go:89] found id: ""
	I0719 15:51:44.298744   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.298755   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:44.298762   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:44.298826   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:44.332638   58817 cri.go:89] found id: ""
	I0719 15:51:44.332665   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.332673   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:44.332679   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:44.332738   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:44.366871   58817 cri.go:89] found id: ""
	I0719 15:51:44.366897   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.366906   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:44.366913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:44.366980   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:44.409353   58817 cri.go:89] found id: ""
	I0719 15:51:44.409381   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.409392   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:44.409402   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:44.409417   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.446148   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:44.446178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:44.497188   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:44.497217   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.511904   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:44.511935   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:44.577175   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:44.577193   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:44.577208   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.161809   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:47.175425   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:47.175490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:47.213648   58817 cri.go:89] found id: ""
	I0719 15:51:47.213674   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.213681   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:47.213687   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:47.213737   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:47.249941   58817 cri.go:89] found id: ""
	I0719 15:51:47.249967   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.249979   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:47.249986   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:47.250041   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:47.284232   58817 cri.go:89] found id: ""
	I0719 15:51:47.284254   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.284261   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:47.284267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:47.284318   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:47.321733   58817 cri.go:89] found id: ""
	I0719 15:51:47.321767   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.321778   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:47.321786   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:47.321844   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:47.358479   58817 cri.go:89] found id: ""
	I0719 15:51:47.358508   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.358520   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:47.358527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:47.358582   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:47.390070   58817 cri.go:89] found id: ""
	I0719 15:51:47.390098   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.390108   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:47.390116   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:47.390176   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:47.429084   58817 cri.go:89] found id: ""
	I0719 15:51:47.429111   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.429118   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:47.429124   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:47.429179   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:47.469938   58817 cri.go:89] found id: ""
	I0719 15:51:47.469969   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.469979   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:47.469991   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:47.470005   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:47.524080   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:47.524110   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:47.538963   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:47.538993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:47.609107   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:47.609128   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:47.609143   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.691984   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:47.692028   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.813328   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:47.318119   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.013597   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.513037   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.083435   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.582222   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.234104   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:50.248706   58817 kubeadm.go:597] duration metric: took 4m2.874850727s to restartPrimaryControlPlane
	W0719 15:51:50.248802   58817 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:50.248827   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:50.712030   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:51:50.727328   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:51:50.737545   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:51:50.748830   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:51:50.748855   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:51:50.748900   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:51:50.758501   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:51:50.758548   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:51:50.767877   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:51:50.777413   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:51:50.777477   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:51:50.787005   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.795917   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:51:50.795971   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.805058   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:51:50.814014   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:51:50.814069   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:51:50.823876   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:51:50.893204   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:51:50.893281   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:51:51.028479   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:51:51.028607   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:51:51.028698   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:51:51.212205   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:51:51.214199   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:51:51.214313   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:51:51.214423   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:51:51.214546   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:51:51.214625   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:51:51.214728   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:51:51.214813   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:51:51.214918   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:51:51.215011   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:51:51.215121   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:51:51.215231   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:51:51.215296   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:51:51.215381   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:51:51.275010   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:51:51.481366   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:51:51.685208   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:51:51.799007   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:51:51.820431   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:51:51.822171   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:51:51.822257   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:51:51.984066   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:51:51.986034   58817 out.go:204]   - Booting up control plane ...
	I0719 15:51:51.986137   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:51:51.988167   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:51:51.989122   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:51:51.989976   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:51:52.000879   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:51:49.811847   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:51.812747   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.312028   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.514497   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:53.012564   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.585244   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:52.587963   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.576923   58417 pod_ready.go:81] duration metric: took 4m0.000887015s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	E0719 15:51:54.576954   58417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 15:51:54.576979   58417 pod_ready.go:38] duration metric: took 4m10.045017696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:51:54.577013   58417 kubeadm.go:597] duration metric: took 4m18.572474217s to restartPrimaryControlPlane
	W0719 15:51:54.577075   58417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:54.577107   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:56.314112   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:58.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:55.012915   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:57.512491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:01.312620   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:03.812880   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:59.512666   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:02.013784   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.314545   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:08.811891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:04.512583   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:09.016808   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:10.813197   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:13.313167   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:11.513329   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:14.012352   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:15.812105   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:17.812843   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:16.014362   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:18.513873   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:20.685347   58417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.108209289s)
	I0719 15:52:20.685431   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:20.699962   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:20.709728   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:20.719022   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:20.719038   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:52:20.719074   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:52:20.727669   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:52:20.727731   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:52:20.736851   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:52:20.745821   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:52:20.745867   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:52:20.755440   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.764307   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:52:20.764360   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.773759   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:52:20.782354   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:52:20.782420   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:52:20.791186   58417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:20.837700   58417 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 15:52:20.837797   58417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:52:20.958336   58417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:20.958486   58417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:20.958629   58417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:20.967904   58417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:20.969995   58417 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:20.970097   58417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:52:20.970197   58417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:20.970325   58417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:52:20.970438   58417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:52:20.970550   58417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:52:20.970633   58417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:52:20.970740   58417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:52:20.970840   58417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:52:20.970949   58417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:52:20.971049   58417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:52:20.971106   58417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:52:20.971184   58417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:21.175226   58417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:21.355994   58417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 15:52:21.453237   58417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:21.569014   58417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:21.672565   58417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:21.673036   58417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:21.675860   58417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:20.312428   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:22.312770   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:24.314183   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.013099   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:23.512341   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.677594   58417 out.go:204]   - Booting up control plane ...
	I0719 15:52:21.677694   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:21.677787   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:21.677894   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:21.695474   58417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:21.701352   58417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:21.701419   58417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:52:21.831941   58417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 15:52:21.832046   58417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 15:52:22.333073   58417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.399393ms
	I0719 15:52:22.333184   58417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 15:52:27.336964   58417 kubeadm.go:310] [api-check] The API server is healthy after 5.002306078s
	I0719 15:52:27.348152   58417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:27.366916   58417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:27.396214   58417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:27.396475   58417 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-382231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:27.408607   58417 kubeadm.go:310] [bootstrap-token] Using token: xdoy2n.29347ekmgral9ki3
	I0719 15:52:27.409857   58417 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:27.409991   58417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:27.415553   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:27.424772   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:27.428421   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:27.439922   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:27.443985   58417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:27.742805   58417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:28.253742   58417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 15:52:28.744380   58417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 15:52:28.744405   58417 kubeadm.go:310] 
	I0719 15:52:28.744486   58417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:28.744498   58417 kubeadm.go:310] 
	I0719 15:52:28.744581   58417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:28.744588   58417 kubeadm.go:310] 
	I0719 15:52:28.744633   58417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 15:52:28.744704   58417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:28.744783   58417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:28.744794   58417 kubeadm.go:310] 
	I0719 15:52:28.744877   58417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 15:52:28.744891   58417 kubeadm.go:310] 
	I0719 15:52:28.744944   58417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:28.744951   58417 kubeadm.go:310] 
	I0719 15:52:28.744992   58417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 15:52:28.745082   58417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:28.745172   58417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:28.745181   58417 kubeadm.go:310] 
	I0719 15:52:28.745253   58417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:28.745319   58417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 15:52:28.745332   58417 kubeadm.go:310] 
	I0719 15:52:28.745412   58417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745499   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 15:52:28.745518   58417 kubeadm.go:310] 	--control-plane 
	I0719 15:52:28.745525   58417 kubeadm.go:310] 
	I0719 15:52:28.745599   58417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:28.745609   58417 kubeadm.go:310] 
	I0719 15:52:28.745677   58417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745778   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 15:52:28.747435   58417 kubeadm.go:310] W0719 15:52:20.814208    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747697   58417 kubeadm.go:310] W0719 15:52:20.814905    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747795   58417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:52:28.747815   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:52:28.747827   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:52:28.749619   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:26.813409   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.814040   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:25.513048   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:27.514730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.750992   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:28.762976   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:52:28.783894   58417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:28.783972   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.783989   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-382231 minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=no-preload-382231 minikube.k8s.io/primary=true
	I0719 15:52:28.808368   58417 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:29.005658   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.505702   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.005765   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.505834   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.005837   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.506329   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.006419   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.505701   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.005735   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.130121   58417 kubeadm.go:1113] duration metric: took 4.346215264s to wait for elevateKubeSystemPrivileges
	I0719 15:52:33.130162   58417 kubeadm.go:394] duration metric: took 4m57.173876302s to StartCluster
	I0719 15:52:33.130187   58417 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.130290   58417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:52:33.131944   58417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.132178   58417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:52:33.132237   58417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:52:33.132339   58417 addons.go:69] Setting storage-provisioner=true in profile "no-preload-382231"
	I0719 15:52:33.132358   58417 addons.go:69] Setting default-storageclass=true in profile "no-preload-382231"
	I0719 15:52:33.132381   58417 addons.go:234] Setting addon storage-provisioner=true in "no-preload-382231"
	I0719 15:52:33.132385   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0719 15:52:33.132391   58417 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:52:33.132392   58417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-382231"
	I0719 15:52:33.132419   58417 addons.go:69] Setting metrics-server=true in profile "no-preload-382231"
	I0719 15:52:33.132423   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132444   58417 addons.go:234] Setting addon metrics-server=true in "no-preload-382231"
	W0719 15:52:33.132452   58417 addons.go:243] addon metrics-server should already be in state true
	I0719 15:52:33.132474   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132740   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132763   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132799   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132810   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132822   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132829   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.134856   58417 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:33.136220   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:33.149028   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0719 15:52:33.149128   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0719 15:52:33.149538   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.149646   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.150093   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150108   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150111   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150119   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150477   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150603   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150955   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.150971   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.151326   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I0719 15:52:33.151359   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.151715   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.152199   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.152223   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.152574   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.153136   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.153170   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.155187   58417 addons.go:234] Setting addon default-storageclass=true in "no-preload-382231"
	W0719 15:52:33.155207   58417 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:52:33.155235   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.155572   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.155602   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.170886   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0719 15:52:33.170884   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0719 15:52:33.171439   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.171510   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0719 15:52:33.171543   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172005   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172026   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172109   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172141   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172162   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172538   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172552   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172609   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172775   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.172831   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172875   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.173021   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.173381   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.173405   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.175118   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.175500   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.177023   58417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:52:33.177041   58417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:52:32.000607   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:52:32.000846   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:32.001125   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:33.178348   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:33.178362   58417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:33.178377   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.178450   58417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.178469   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:52:33.178486   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.182287   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182598   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.182617   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182741   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.182948   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.183074   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.183204   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.183372   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183940   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.183959   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183994   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.184237   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.184356   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.184505   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.191628   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0719 15:52:33.191984   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.192366   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.192385   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.192707   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.192866   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.194285   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.194485   58417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.194499   58417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:33.194514   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.197526   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.197853   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.197872   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.198087   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.198335   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.198472   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.198604   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.382687   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:52:33.403225   58417 node_ready.go:35] waiting up to 6m0s for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430507   58417 node_ready.go:49] node "no-preload-382231" has status "Ready":"True"
	I0719 15:52:33.430535   58417 node_ready.go:38] duration metric: took 27.282654ms for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430546   58417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:33.482352   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.555210   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.565855   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:33.565874   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:52:33.571653   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.609541   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:33.609569   58417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:33.674428   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:33.674455   58417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:33.746703   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:34.092029   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092051   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092341   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092359   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.092369   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092379   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092604   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092628   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.092634   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.093766   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.093785   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094025   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094043   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094076   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.094088   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094325   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094343   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094349   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128393   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.128412   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.128715   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128766   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.128775   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.319737   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.319764   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320141   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320161   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320165   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.320184   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.320199   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320441   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320462   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320475   58417 addons.go:475] Verifying addon metrics-server=true in "no-preload-382231"
	I0719 15:52:34.320482   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.322137   58417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:52:30.812091   59208 pod_ready.go:81] duration metric: took 4m0.006187238s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:30.812113   59208 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:30.812120   59208 pod_ready.go:38] duration metric: took 4m8.614544303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.812135   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:30.812161   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:30.812208   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:30.861054   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:30.861074   59208 cri.go:89] found id: ""
	I0719 15:52:30.861083   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:30.861144   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.865653   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:30.865708   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:30.900435   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:30.900459   59208 cri.go:89] found id: ""
	I0719 15:52:30.900468   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:30.900512   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.904686   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:30.904747   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:30.950618   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.950638   59208 cri.go:89] found id: ""
	I0719 15:52:30.950646   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:30.950691   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.955080   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:30.955147   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:30.996665   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:30.996691   59208 cri.go:89] found id: ""
	I0719 15:52:30.996704   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:30.996778   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.001122   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:31.001191   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:31.042946   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.042969   59208 cri.go:89] found id: ""
	I0719 15:52:31.042979   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:31.043039   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.047311   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:31.047365   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:31.086140   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.086166   59208 cri.go:89] found id: ""
	I0719 15:52:31.086175   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:31.086230   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.091742   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:31.091818   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:31.134209   59208 cri.go:89] found id: ""
	I0719 15:52:31.134241   59208 logs.go:276] 0 containers: []
	W0719 15:52:31.134252   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:31.134260   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:31.134316   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:31.173297   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.173325   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.173331   59208 cri.go:89] found id: ""
	I0719 15:52:31.173353   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:31.173414   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.177951   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.182099   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:31.182121   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:31.196541   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:31.196565   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:31.322528   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:31.322555   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:31.369628   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:31.369658   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:31.417834   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:31.417867   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:31.459116   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:31.459145   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.500986   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:31.501018   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.578557   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:31.578606   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:31.635053   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:31.635082   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:31.692604   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:31.692635   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.729765   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:31.729801   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.766152   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:31.766177   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:32.301240   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:32.301278   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
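	(Editor's note: the block above repeats one collection pattern per component: `sudo crictl ps -a --quiet --name=<component>` to resolve container IDs, then `sudo /usr/bin/crictl logs --tail 400 <id>` to capture the last 400 log lines. Below is a minimal Go sketch of that same two-step collection, assuming crictl is installed on the node and sudo is available; the helper names are illustrative and are not minikube's internal API.)

// gather_logs.go - sketch of the crictl-based collection seen above:
// list container IDs for each name filter, then tail each container's logs.
// Assumes crictl on PATH and sudo rights; illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers whose name matches filter.
func containerIDs(filter string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs prints the last n lines of one container's logs.
func tailLogs(id string, n int) error {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("list failed for", name, ":", err)
			continue
		}
		for _, id := range ids {
			fmt.Printf("=== %s [%s] ===\n", name, id)
			_ = tailLogs(id, 400)
		}
	}
}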
	I0719 15:52:30.013083   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:32.013142   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:34.323358   58417 addons.go:510] duration metric: took 1.19112329s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:52:37.001693   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:37.001896   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
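	(Editor's note: the kubeadm message above quotes the exact probe it performs, `curl -sSL http://localhost:10248/healthz`, and reports "connection refused" because the kubelet is not serving. The following Go sketch is that same probe run on the node itself; it is an illustration, not kubeadm's code.)

// kubelet_healthz.go - sketch of the kubelet health probe quoted above:
// GET http://localhost:10248/healthz and expect HTTP 200. Run on the node.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// Matches the failure above: connection refused while the kubelet is down.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, string(body))
}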
	I0719 15:52:34.849019   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:34.866751   59208 api_server.go:72] duration metric: took 4m20.402312557s to wait for apiserver process to appear ...
	I0719 15:52:34.866779   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:34.866816   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:34.866876   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:34.905505   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.905532   59208 cri.go:89] found id: ""
	I0719 15:52:34.905542   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:34.905609   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.910996   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:34.911069   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:34.958076   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:34.958100   59208 cri.go:89] found id: ""
	I0719 15:52:34.958110   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:34.958166   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.962439   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:34.962507   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:34.999095   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:34.999117   59208 cri.go:89] found id: ""
	I0719 15:52:34.999126   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:34.999178   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.003785   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:35.003848   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:35.042585   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.042613   59208 cri.go:89] found id: ""
	I0719 15:52:35.042622   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:35.042683   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.048705   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:35.048770   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:35.092408   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.092435   59208 cri.go:89] found id: ""
	I0719 15:52:35.092444   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:35.092499   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.096983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:35.097050   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:35.135694   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.135717   59208 cri.go:89] found id: ""
	I0719 15:52:35.135726   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:35.135782   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.140145   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:35.140223   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:35.178912   59208 cri.go:89] found id: ""
	I0719 15:52:35.178938   59208 logs.go:276] 0 containers: []
	W0719 15:52:35.178948   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:35.178955   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:35.179015   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:35.229067   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.229090   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.229104   59208 cri.go:89] found id: ""
	I0719 15:52:35.229112   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:35.229172   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.234985   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.240098   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:35.240120   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:35.299418   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:35.299449   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:35.316294   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:35.316330   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:35.433573   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:35.433610   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:35.479149   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:35.479181   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:35.526270   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:35.526299   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.564209   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:35.564241   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.601985   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:35.602020   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.669986   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:35.670015   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.711544   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:35.711580   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:35.763800   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:35.763831   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:35.822699   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:35.822732   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.863377   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:35.863422   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:38.777749   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:52:38.781984   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:52:38.782935   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:38.782955   59208 api_server.go:131] duration metric: took 3.916169938s to wait for apiserver health ...
	I0719 15:52:38.782963   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:38.782983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:38.783026   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:38.818364   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:38.818387   59208 cri.go:89] found id: ""
	I0719 15:52:38.818395   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:38.818442   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.823001   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:38.823054   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:38.857871   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:38.857900   59208 cri.go:89] found id: ""
	I0719 15:52:38.857909   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:38.857958   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.864314   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:38.864375   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:38.910404   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:38.910434   59208 cri.go:89] found id: ""
	I0719 15:52:38.910445   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:38.910505   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.915588   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:38.915645   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:38.952981   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:38.953002   59208 cri.go:89] found id: ""
	I0719 15:52:38.953009   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:38.953055   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.957397   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:38.957447   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:39.002973   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.003001   59208 cri.go:89] found id: ""
	I0719 15:52:39.003011   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:39.003059   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.007496   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:39.007568   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:39.045257   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.045282   59208 cri.go:89] found id: ""
	I0719 15:52:39.045291   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:39.045351   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.049358   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:39.049415   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:39.083263   59208 cri.go:89] found id: ""
	I0719 15:52:39.083303   59208 logs.go:276] 0 containers: []
	W0719 15:52:39.083314   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:39.083321   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:39.083391   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:39.121305   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.121348   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.121354   59208 cri.go:89] found id: ""
	I0719 15:52:39.121363   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:39.121421   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.126259   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.130395   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:39.130413   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:39.171213   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:39.171239   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.206545   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:39.206577   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:39.267068   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:39.267105   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:39.373510   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:39.373544   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.512374   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.012559   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:39.013766   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:35.495479   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.989424   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:38.489746   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.489775   58417 pod_ready.go:81] duration metric: took 5.007393051s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.489790   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495855   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.495884   58417 pod_ready.go:81] duration metric: took 6.085398ms for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495895   58417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:40.502651   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:41.503286   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.503309   58417 pod_ready.go:81] duration metric: took 3.007406201s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.503321   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513225   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.513245   58417 pod_ready.go:81] duration metric: took 9.916405ms for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513256   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517651   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.517668   58417 pod_ready.go:81] duration metric: took 4.40518ms for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517677   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522529   58417 pod_ready.go:92] pod "kube-proxy-qd84x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.522544   58417 pod_ready.go:81] duration metric: took 4.861257ms for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522551   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687964   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.687987   58417 pod_ready.go:81] duration metric: took 165.428951ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687997   58417 pod_ready.go:38] duration metric: took 8.257437931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:41.688016   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:41.688069   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:41.705213   58417 api_server.go:72] duration metric: took 8.573000368s to wait for apiserver process to appear ...
	I0719 15:52:41.705236   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:41.705256   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:52:41.709425   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:52:41.710427   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:52:41.710447   58417 api_server.go:131] duration metric: took 5.203308ms to wait for apiserver health ...
	I0719 15:52:41.710455   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:41.890063   58417 system_pods.go:59] 9 kube-system pods found
	I0719 15:52:41.890091   58417 system_pods.go:61] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:41.890095   58417 system_pods.go:61] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:41.890099   58417 system_pods.go:61] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:41.890103   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:41.890106   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:41.890109   58417 system_pods.go:61] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:41.890112   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:41.890117   58417 system_pods.go:61] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:41.890121   58417 system_pods.go:61] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:41.890128   58417 system_pods.go:74] duration metric: took 179.666477ms to wait for pod list to return data ...
	I0719 15:52:41.890135   58417 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.086946   58417 default_sa.go:45] found service account: "default"
	I0719 15:52:42.086973   58417 default_sa.go:55] duration metric: took 196.832888ms for default service account to be created ...
	I0719 15:52:42.086984   58417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.289457   58417 system_pods.go:86] 9 kube-system pods found
	I0719 15:52:42.289483   58417 system_pods.go:89] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:42.289489   58417 system_pods.go:89] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:42.289493   58417 system_pods.go:89] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:42.289498   58417 system_pods.go:89] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:42.289502   58417 system_pods.go:89] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:42.289506   58417 system_pods.go:89] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:42.289510   58417 system_pods.go:89] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:42.289518   58417 system_pods.go:89] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.289523   58417 system_pods.go:89] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:42.289530   58417 system_pods.go:126] duration metric: took 202.54151ms to wait for k8s-apps to be running ...
	I0719 15:52:42.289536   58417 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.289575   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.304866   58417 system_svc.go:56] duration metric: took 15.319153ms WaitForService to wait for kubelet
	I0719 15:52:42.304931   58417 kubeadm.go:582] duration metric: took 9.172718104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.304958   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.488087   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.488108   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.488122   58417 node_conditions.go:105] duration metric: took 183.159221ms to run NodePressure ...
	I0719 15:52:42.488135   58417 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.488144   58417 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.488157   58417 start.go:255] writing updated cluster config ...
	I0719 15:52:42.488453   58417 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.536465   58417 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 15:52:42.538606   58417 out.go:177] * Done! kubectl is now configured to use "no-preload-382231" cluster and "default" namespace by default
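	(Editor's note: before the "Done!" line above, the run waits on each kube-system pod's Ready condition (pod_ready.go); the metrics-server pod that never reaches Ready is what exhausts the 4m wait seen elsewhere in this log. Below is a minimal client-go sketch of such a Ready check, assuming a go.mod that pulls in k8s.io/client-go and a kubeconfig path passed as the first argument; it mirrors the idea only and is not minikube's implementation.)

// pod_ready.go - sketch of a pod "Ready" condition check, per the waits above.
// Assumes k8s.io/client-go in go.mod; kubeconfig path is os.Args[1].
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // kubeconfig path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above; any kube-system pod works the same way.
	ready, err := isPodReady(client, "kube-system", "etcd-no-preload-382231")
	fmt.Println("ready:", ready, "err:", err)
}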
	I0719 15:52:39.422000   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:39.422034   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:39.473826   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:39.473860   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:39.515998   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:39.516023   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:39.559475   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:39.559506   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:39.574174   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:39.574205   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.615906   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:39.615933   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.676764   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:39.676795   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.714437   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:39.714467   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:42.584088   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:42.584114   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.584119   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.584123   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.584127   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.584130   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.584133   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.584138   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.584143   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.584150   59208 system_pods.go:74] duration metric: took 3.801182741s to wait for pod list to return data ...
	I0719 15:52:42.584156   59208 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.586910   59208 default_sa.go:45] found service account: "default"
	I0719 15:52:42.586934   59208 default_sa.go:55] duration metric: took 2.771722ms for default service account to be created ...
	I0719 15:52:42.586943   59208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.593611   59208 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:42.593634   59208 system_pods.go:89] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.593639   59208 system_pods.go:89] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.593645   59208 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.593650   59208 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.593654   59208 system_pods.go:89] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.593658   59208 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.593669   59208 system_pods.go:89] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.593673   59208 system_pods.go:89] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.593680   59208 system_pods.go:126] duration metric: took 6.731347ms to wait for k8s-apps to be running ...
	I0719 15:52:42.593687   59208 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.593726   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.615811   59208 system_svc.go:56] duration metric: took 22.114487ms WaitForService to wait for kubelet
	I0719 15:52:42.615841   59208 kubeadm.go:582] duration metric: took 4m28.151407807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.615864   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.619021   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.619040   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.619050   59208 node_conditions.go:105] duration metric: took 3.180958ms to run NodePressure ...
	I0719 15:52:42.619060   59208 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.619067   59208 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.619079   59208 start.go:255] writing updated cluster config ...
	I0719 15:52:42.619329   59208 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.677117   59208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:42.679317   59208 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-601445" cluster and "default" namespace by default
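	(Editor's note: the start.go:600 lines above compare the local kubectl version with the cluster version and report the "minor skew" (0 for this cluster, 1 for the no-preload cluster earlier). The log does not show how that number is computed, so the sketch below is only a plausible reading: take the minor component of each version string and report the absolute difference.)

// skew.go - sketch of the client/cluster "minor skew" comparison reported above.
// Assumption: skew = |minor(cluster) - minor(kubectl)|.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor version from strings like "1.30.3" or "1.31.0-beta.0".
func minor(version string) int {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.30.3", "1.31.0-beta.0" // versions taken from the log above
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}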
	I0719 15:52:41.514013   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:44.012173   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:47.002231   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:47.002432   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:46.013717   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:48.013121   58376 pod_ready.go:81] duration metric: took 4m0.006772624s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:48.013143   58376 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:48.013150   58376 pod_ready.go:38] duration metric: took 4m4.417474484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:48.013165   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:48.013194   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:48.013234   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:48.067138   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.067166   58376 cri.go:89] found id: ""
	I0719 15:52:48.067175   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:48.067218   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.071486   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:48.071531   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:48.115491   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.115514   58376 cri.go:89] found id: ""
	I0719 15:52:48.115525   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:48.115583   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.119693   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:48.119750   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:48.161158   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.161185   58376 cri.go:89] found id: ""
	I0719 15:52:48.161194   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:48.161257   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.165533   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:48.165584   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:48.207507   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.207528   58376 cri.go:89] found id: ""
	I0719 15:52:48.207537   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:48.207596   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.212070   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:48.212145   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:48.250413   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.250441   58376 cri.go:89] found id: ""
	I0719 15:52:48.250451   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:48.250510   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.255025   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:48.255095   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:48.289898   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.289922   58376 cri.go:89] found id: ""
	I0719 15:52:48.289930   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:48.289976   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.294440   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:48.294489   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:48.329287   58376 cri.go:89] found id: ""
	I0719 15:52:48.329314   58376 logs.go:276] 0 containers: []
	W0719 15:52:48.329326   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:48.329332   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:48.329394   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:48.373215   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.373242   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.373248   58376 cri.go:89] found id: ""
	I0719 15:52:48.373257   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:48.373311   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.377591   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.381610   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:48.381635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:48.440106   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:48.440148   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:48.455200   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:48.455234   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.496729   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:48.496757   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.535475   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:48.535501   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.592954   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:48.592993   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.635925   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:48.635957   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.671611   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:48.671642   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:48.809648   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:48.809681   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.863327   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:48.863361   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.902200   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:48.902245   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.937497   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:48.937525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:49.446900   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:49.446933   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:51.988535   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:52.005140   58376 api_server.go:72] duration metric: took 4m16.116469116s to wait for apiserver process to appear ...
	I0719 15:52:52.005165   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:52.005206   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:52.005258   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:52.041113   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.041143   58376 cri.go:89] found id: ""
	I0719 15:52:52.041150   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:52.041199   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.045292   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:52.045349   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:52.086747   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.086770   58376 cri.go:89] found id: ""
	I0719 15:52:52.086778   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:52.086821   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.091957   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:52.092015   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:52.128096   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.128128   58376 cri.go:89] found id: ""
	I0719 15:52:52.128138   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:52.128204   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.132889   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:52.132949   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:52.168359   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.168389   58376 cri.go:89] found id: ""
	I0719 15:52:52.168398   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:52.168454   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.172577   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:52.172639   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:52.211667   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.211684   58376 cri.go:89] found id: ""
	I0719 15:52:52.211691   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:52.211740   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.215827   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:52.215893   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:52.252105   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.252130   58376 cri.go:89] found id: ""
	I0719 15:52:52.252140   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:52.252194   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.256407   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:52.256464   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:52.292646   58376 cri.go:89] found id: ""
	I0719 15:52:52.292675   58376 logs.go:276] 0 containers: []
	W0719 15:52:52.292685   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:52.292693   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:52.292755   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:52.326845   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.326875   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.326880   58376 cri.go:89] found id: ""
	I0719 15:52:52.326889   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:52.326946   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.331338   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.335530   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:52.335554   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.371981   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:52.372010   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.406921   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:52.406946   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.442975   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:52.443007   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:52.497838   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:52.497873   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:52.556739   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:52.556776   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:52.665610   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:52.665643   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.711547   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:52.711580   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.759589   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:52.759634   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.807300   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:52.807374   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.857159   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:52.857186   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.917896   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:52.917931   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:53.342603   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:53.342646   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:55.857727   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:52:55.861835   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:52:55.862804   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:55.862822   58376 api_server.go:131] duration metric: took 3.857650801s to wait for apiserver health ...
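(For reference, the apiserver health probe logged above can be reproduced by hand against the same endpoint; a minimal sketch, assuming it is run from the host with TLS verification skipped because the endpoint uses the cluster's self-signed certificate, or on the node using the kubeconfig minikube writes. A healthy apiserver answers "ok".)

    # hypothetical manual check of the same /healthz endpoint
    curl -sk https://192.168.72.37:8443/healthz
    # or, on the node, via the kubeconfig minikube maintains:
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz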
	I0719 15:52:55.862829   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:55.862852   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:55.862905   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:55.900840   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:55.900859   58376 cri.go:89] found id: ""
	I0719 15:52:55.900866   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:55.900909   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.906205   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:55.906291   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:55.950855   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:55.950879   58376 cri.go:89] found id: ""
	I0719 15:52:55.950887   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:55.950939   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.955407   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:55.955472   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:55.994954   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:55.994981   58376 cri.go:89] found id: ""
	I0719 15:52:55.994992   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:55.995052   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.999179   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:55.999241   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:56.036497   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.036521   58376 cri.go:89] found id: ""
	I0719 15:52:56.036530   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:56.036585   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.041834   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:56.041900   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:56.082911   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.082934   58376 cri.go:89] found id: ""
	I0719 15:52:56.082943   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:56.082998   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.087505   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:56.087571   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:56.124517   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.124544   58376 cri.go:89] found id: ""
	I0719 15:52:56.124554   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:56.124616   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.129221   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:56.129297   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:56.170151   58376 cri.go:89] found id: ""
	I0719 15:52:56.170177   58376 logs.go:276] 0 containers: []
	W0719 15:52:56.170193   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:56.170212   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:56.170292   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:56.218351   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:56.218377   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.218381   58376 cri.go:89] found id: ""
	I0719 15:52:56.218388   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:56.218437   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.223426   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.227742   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:56.227759   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.271701   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:56.271733   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:56.325333   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:56.325366   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:56.431391   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:56.431423   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:56.485442   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:56.485472   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:56.527493   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:56.527525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.563260   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:56.563289   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.600604   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:56.600635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.656262   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:56.656305   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:57.031511   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:57.031549   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:57.046723   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:57.046748   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:57.083358   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:57.083390   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:57.124108   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:57.124136   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
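(The repeated "Gathering logs for ..." steps above reduce to a handful of commands run over SSH; a rough sketch of collecting the same material manually on the node, with the container ID placeholder standing in for whichever ID crictl reports:)

    sudo journalctl -u kubelet -n 400                     # kubelet logs
    sudo journalctl -u crio -n 400                        # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # recent kernel warnings
    sudo crictl ps -a --quiet --name=kube-apiserver       # look up a component's container ID
    sudo crictl logs --tail 400 <container-id>            # then dump that container's recent log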
	I0719 15:52:59.670804   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:59.670831   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.670836   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.670840   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.670844   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.670847   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.670850   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.670855   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.670859   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.670865   58376 system_pods.go:74] duration metric: took 3.808031391s to wait for pod list to return data ...
	I0719 15:52:59.670871   58376 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:59.673231   58376 default_sa.go:45] found service account: "default"
	I0719 15:52:59.673249   58376 default_sa.go:55] duration metric: took 2.372657ms for default service account to be created ...
	I0719 15:52:59.673255   58376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:59.678267   58376 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:59.678289   58376 system_pods.go:89] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.678296   58376 system_pods.go:89] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.678303   58376 system_pods.go:89] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.678310   58376 system_pods.go:89] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.678315   58376 system_pods.go:89] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.678322   58376 system_pods.go:89] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.678331   58376 system_pods.go:89] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.678341   58376 system_pods.go:89] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.678352   58376 system_pods.go:126] duration metric: took 5.090968ms to wait for k8s-apps to be running ...
	I0719 15:52:59.678362   58376 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:59.678411   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:59.695116   58376 system_svc.go:56] duration metric: took 16.750228ms WaitForService to wait for kubelet
	I0719 15:52:59.695139   58376 kubeadm.go:582] duration metric: took 4m23.806469478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:59.695163   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:59.697573   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:59.697592   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:59.697602   58376 node_conditions.go:105] duration metric: took 2.433643ms to run NodePressure ...
	I0719 15:52:59.697612   58376 start.go:241] waiting for startup goroutines ...
	I0719 15:52:59.697618   58376 start.go:246] waiting for cluster config update ...
	I0719 15:52:59.697629   58376 start.go:255] writing updated cluster config ...
	I0719 15:52:59.697907   58376 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:59.744965   58376 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:59.746888   58376 out.go:177] * Done! kubectl is now configured to use "embed-certs-817144" cluster and "default" namespace by default
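(Before this "Done!" line, the run verified apiserver health, the kube-system pods, the default service account, and the kubelet service. A minimal sketch of the same verification from the host, assuming the kubectl context name matches the profile name as configured above:)

    kubectl --context embed-certs-817144 get pods -n kube-system   # everything Running except the pending metrics-server
    kubectl --context embed-certs-817144 get sa default             # default service account present
    minikube -p embed-certs-817144 ssh -- sudo systemctl is-active kubelet   # kubelet service active on the node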
	I0719 15:53:07.003006   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:07.003249   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004552   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:47.004805   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004816   58817 kubeadm.go:310] 
	I0719 15:53:47.004902   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:53:47.004996   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:53:47.005020   58817 kubeadm.go:310] 
	I0719 15:53:47.005068   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:53:47.005117   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:53:47.005246   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:53:47.005262   58817 kubeadm.go:310] 
	I0719 15:53:47.005397   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:53:47.005458   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:53:47.005508   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:53:47.005522   58817 kubeadm.go:310] 
	I0719 15:53:47.005643   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:53:47.005714   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:53:47.005720   58817 kubeadm.go:310] 
	I0719 15:53:47.005828   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:53:47.005924   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:53:47.005987   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:53:47.006080   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:53:47.006092   58817 kubeadm.go:310] 
	I0719 15:53:47.006824   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:53:47.006941   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:53:47.007028   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
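(When kubeadm fails at wait-control-plane like this, the commands it suggests in the error text above are the quickest way to see why the kubelet never answered on port 10248; a sketch of that triage on the node, with paths and the container ID placeholder taken from the error text rather than verified here:)

    sudo systemctl status kubelet                                    # is the service running at all?
    sudo journalctl -xeu kubelet                                     # the kubelet's own error output
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # any control-plane containers started?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>                  # logs of a failing container
    curl -sSL http://localhost:10248/healthz                         # the probe kubeadm keeps retrying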
	W0719 15:53:47.007180   58817 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 15:53:47.007244   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:53:47.468272   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:53:47.483560   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:53:47.494671   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:53:47.494691   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:53:47.494742   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:53:47.503568   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:53:47.503630   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:53:47.512606   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:53:47.521247   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:53:47.521303   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:53:47.530361   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.539748   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:53:47.539799   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.549243   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:53:47.559306   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:53:47.559369   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
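(The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed before kubeadm init is retried. A condensed sketch of the same loop, assuming the same four files:)

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f   # drop configs that are missing or do not target the expected endpoint
    done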
	I0719 15:53:47.570095   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:53:47.648871   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:53:47.649078   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:53:47.792982   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:53:47.793141   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:53:47.793254   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:53:47.992636   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:53:47.994547   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:53:47.994648   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:53:47.994734   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:53:47.994866   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:53:47.994963   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:53:47.995077   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:53:47.995148   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:53:47.995250   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:53:47.995336   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:53:47.995447   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:53:47.995549   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:53:47.995603   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:53:47.995685   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:53:48.092671   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:53:48.256432   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:53:48.334799   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:53:48.483435   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:53:48.504681   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:53:48.505503   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:53:48.505553   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:53:48.654795   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:53:48.656738   58817 out.go:204]   - Booting up control plane ...
	I0719 15:53:48.656849   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:53:48.664278   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:53:48.665556   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:53:48.666292   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:53:48.668355   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:54:28.670119   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:54:28.670451   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:28.670679   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:33.671159   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:33.671408   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:43.671899   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:43.672129   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:03.673219   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:03.673444   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674003   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:43.674282   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674311   58817 kubeadm.go:310] 
	I0719 15:55:43.674362   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:55:43.674430   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:55:43.674439   58817 kubeadm.go:310] 
	I0719 15:55:43.674479   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:55:43.674551   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:55:43.674694   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:55:43.674711   58817 kubeadm.go:310] 
	I0719 15:55:43.674872   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:55:43.674923   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:55:43.674973   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:55:43.674987   58817 kubeadm.go:310] 
	I0719 15:55:43.675076   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:55:43.675185   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:55:43.675204   58817 kubeadm.go:310] 
	I0719 15:55:43.675343   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:55:43.675486   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:55:43.675593   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:55:43.675698   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:55:43.675712   58817 kubeadm.go:310] 
	I0719 15:55:43.676679   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:55:43.676793   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:55:43.676881   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:55:43.676950   58817 kubeadm.go:394] duration metric: took 7m56.357000435s to StartCluster
	I0719 15:55:43.677009   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:55:43.677063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:55:43.720714   58817 cri.go:89] found id: ""
	I0719 15:55:43.720746   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.720757   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:55:43.720765   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:55:43.720832   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:55:43.758961   58817 cri.go:89] found id: ""
	I0719 15:55:43.758987   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.758995   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:55:43.759001   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:55:43.759048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:55:43.798844   58817 cri.go:89] found id: ""
	I0719 15:55:43.798872   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.798882   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:55:43.798889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:55:43.798960   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:55:43.835395   58817 cri.go:89] found id: ""
	I0719 15:55:43.835418   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.835426   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:55:43.835432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:55:43.835499   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:55:43.871773   58817 cri.go:89] found id: ""
	I0719 15:55:43.871800   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.871810   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:55:43.871817   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:55:43.871881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:55:43.903531   58817 cri.go:89] found id: ""
	I0719 15:55:43.903552   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.903559   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:55:43.903565   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:55:43.903613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:55:43.943261   58817 cri.go:89] found id: ""
	I0719 15:55:43.943288   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.943299   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:55:43.943306   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:55:43.943364   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:55:43.980788   58817 cri.go:89] found id: ""
	I0719 15:55:43.980815   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.980826   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:55:43.980837   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:55:43.980853   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:55:44.033880   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:55:44.033922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:55:44.048683   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:55:44.048709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:55:44.129001   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:55:44.129028   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:55:44.129043   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:55:44.245246   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:55:44.245282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:55:44.303587   58817 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:55:44.303632   58817 out.go:239] * 
	W0719 15:55:44.303689   58817 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.303716   58817 out.go:239] * 
	W0719 15:55:44.304733   58817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:55:44.308714   58817 out.go:177] 
	W0719 15:55:44.310103   58817 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.310163   58817 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:55:44.310190   58817 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:55:44.311707   58817 out.go:177] 
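	The block above shows minikube exiting with K8S_KUBELET_NOT_RUNNING because kubeadm timed out waiting for the kubelet's /healthz endpoint, and the log itself points at two follow-ups: inspect the kubelet journal and retry with the systemd cgroup driver. A minimal sketch of how one might follow that advice, with <profile> standing in for the affected profile name (these commands are illustrative editor notes, not part of the captured test output):
	
		# inspect the kubelet on the minikube VM for this profile
		out/minikube-linux-amd64 -p <profile> ssh "sudo systemctl status kubelet"
		out/minikube-linux-amd64 -p <profile> ssh "sudo journalctl -xeu kubelet | tail -n 100"
		# list control-plane containers via CRI-O, as the kubeadm output recommends
		out/minikube-linux-amd64 -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
		# retry the start with the cgroup driver suggested in the log
		out/minikube-linux-amd64 start -p <profile> --extra-config=kubelet.cgroup-driver=systemd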
	
	
	==> CRI-O <==
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.807697210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404921807669021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20f14d41-0f20-44dc-aed3-864485b247de name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.808188795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e539021-eb87-4478-b397-3ffed7e999eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.808240751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e539021-eb87-4478-b397-3ffed7e999eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.808434513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e539021-eb87-4478-b397-3ffed7e999eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.850104386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b7f27af-5a25-4419-b750-8188a006857b name=/runtime.v1.RuntimeService/Version
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.850179119Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b7f27af-5a25-4419-b750-8188a006857b name=/runtime.v1.RuntimeService/Version
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.851283681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60e9e629-b2da-4262-bf5a-7517b88cf605 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.851708768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404921851687073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60e9e629-b2da-4262-bf5a-7517b88cf605 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.852131454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42f9bf2a-367a-4ae9-9f1f-064381a5bcdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.852188511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42f9bf2a-367a-4ae9-9f1f-064381a5bcdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.852381254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42f9bf2a-367a-4ae9-9f1f-064381a5bcdb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.898088572Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ea3aac6-d0a9-40a4-98ef-0d10dcd9d702 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.901887199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ea3aac6-d0a9-40a4-98ef-0d10dcd9d702 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.905186224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88349375-2300-43a8-9826-09acc5ddcad4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.905652811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404921905585496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88349375-2300-43a8-9826-09acc5ddcad4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.906257318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09bf5014-edfa-478e-ba79-01ff9cde37a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.906443554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09bf5014-edfa-478e-ba79-01ff9cde37a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.906736501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09bf5014-edfa-478e-ba79-01ff9cde37a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.940666167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0025290-505e-4a3d-b1fe-429e8212fe89 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.940768253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0025290-505e-4a3d-b1fe-429e8212fe89 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.941819841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f836f70e-6d5f-4d75-becf-f05347f94c4f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.942262874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721404921942230177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f836f70e-6d5f-4d75-becf-f05347f94c4f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.942811573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02ed78d5-0afe-4c97-964a-48dfaad3d1f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.942907653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02ed78d5-0afe-4c97-964a-48dfaad3d1f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:02:01 embed-certs-817144 crio[729]: time="2024-07-19 16:02:01.943101067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02ed78d5-0afe-4c97-964a-48dfaad3d1f6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33ca90d25224c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   3dd1229229ae7       storage-provisioner
	4856880cbbbc2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   342d3a594dfc1       busybox
	79faf7b7b4478       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   1df5728cfdc2a       coredns-7db6d8ff4d-n945p
	4ab77ba1bf35a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   3dd1229229ae7       storage-provisioner
	760d42fba7d1a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   56445467b2f3b       kube-proxy-4d4g9
	b5cdfd8260b76       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   322c44f99e83b       etcd-embed-certs-817144
	f82d9ede0d89b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   9edba2c454442       kube-scheduler-embed-certs-817144
	e92e20675555d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   d2f3f20eab4c0       kube-apiserver-embed-certs-817144
	4c26eb67ddb9a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   1ddec419a83ee       kube-controller-manager-embed-certs-817144
	
	
	==> coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44502 - 40098 "HINFO IN 710626314888396658.4129644044510388121. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023242924s
	
	
	==> describe nodes <==
	Name:               embed-certs-817144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-817144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=embed-certs-817144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_38_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:38:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-817144
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 16:02:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 15:59:14 +0000   Fri, 19 Jul 2024 15:38:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 15:59:14 +0000   Fri, 19 Jul 2024 15:38:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 15:59:14 +0000   Fri, 19 Jul 2024 15:38:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 15:59:14 +0000   Fri, 19 Jul 2024 15:48:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.37
	  Hostname:    embed-certs-817144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 437afac46acd4383a51d50d4eabced8c
	  System UUID:                437afac4-6acd-4383-a51d-50d4eabced8c
	  Boot ID:                    c07ff149-525e-46a5-8746-fb724d8ffcc8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-n945p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-embed-certs-817144                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-embed-certs-817144             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-embed-certs-817144    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-4d4g9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-embed-certs-817144             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-569cc877fc-2tsch               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node embed-certs-817144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node embed-certs-817144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node embed-certs-817144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node embed-certs-817144 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                kubelet          Node embed-certs-817144 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node embed-certs-817144 event: Registered Node embed-certs-817144 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-817144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-817144 event: Registered Node embed-certs-817144 in Controller
	
	
	==> dmesg <==
	[Jul19 15:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051481] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043260] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.758070] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.325203] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619492] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.079673] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.062225] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071331] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.205090] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.123084] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.292113] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.454412] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.063316] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.967858] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.600107] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.940097] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.806844] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.458162] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] <==
	{"level":"info","ts":"2024-07-19T15:48:30.156201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 switched to configuration voters=(6139628887946177795)"}
	{"level":"info","ts":"2024-07-19T15:48:30.156304Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ea1c0389329f2e90","local-member-id":"553457f1c6c22d03","added-peer-id":"553457f1c6c22d03","added-peer-peer-urls":["https://192.168.72.37:2380"]}
	{"level":"info","ts":"2024-07-19T15:48:30.156436Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ea1c0389329f2e90","local-member-id":"553457f1c6c22d03","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:48:30.156536Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:48:30.158405Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.37:2380"}
	{"level":"info","ts":"2024-07-19T15:48:30.158434Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.37:2380"}
	{"level":"info","ts":"2024-07-19T15:48:30.159064Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"553457f1c6c22d03","initial-advertise-peer-urls":["https://192.168.72.37:2380"],"listen-peer-urls":["https://192.168.72.37:2380"],"advertise-client-urls":["https://192.168.72.37:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.37:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:48:30.159112Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:48:31.401249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T15:48:31.401304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:48:31.401361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 received MsgPreVoteResp from 553457f1c6c22d03 at term 2"}
	{"level":"info","ts":"2024-07-19T15:48:31.401376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.401382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 received MsgVoteResp from 553457f1c6c22d03 at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.401391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.401412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 553457f1c6c22d03 elected leader 553457f1c6c22d03 at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.403059Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"553457f1c6c22d03","local-member-attributes":"{Name:embed-certs-817144 ClientURLs:[https://192.168.72.37:2379]}","request-path":"/0/members/553457f1c6c22d03/attributes","cluster-id":"ea1c0389329f2e90","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:48:31.403106Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:48:31.403301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:48:31.403353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:48:31.403388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:48:31.405072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.37:2379"}
	{"level":"info","ts":"2024-07-19T15:48:31.405576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T15:58:31.434089Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2024-07-19T15:58:31.44487Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":864,"took":"9.975076ms","hash":1624098574,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2633728,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-19T15:58:31.444952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1624098574,"revision":864,"compact-revision":-1}
	
	
	==> kernel <==
	 16:02:02 up 13 min,  0 users,  load average: 0.08, 0.07, 0.04
	Linux embed-certs-817144 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] <==
	I0719 15:56:33.642054       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:58:32.643809       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:58:32.644148       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 15:58:33.644437       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:58:33.644490       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 15:58:33.644500       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:58:33.644556       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:58:33.644778       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 15:58:33.645897       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:59:33.645148       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:59:33.645252       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 15:59:33.645264       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 15:59:33.646237       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 15:59:33.646316       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 15:59:33.646349       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:01:33.646073       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:01:33.646363       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 16:01:33.646398       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:01:33.646474       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:01:33.646554       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:01:33.648326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] <==
	I0719 15:56:15.788112       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:56:45.338698       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:56:45.795790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:57:15.344144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:57:15.804566       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:57:45.349126       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:57:45.812968       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:58:15.353557       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:58:15.821265       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:58:45.358784       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:58:45.829143       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 15:59:15.363825       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:59:15.838906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 15:59:39.738238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="227.605µs"
	E0719 15:59:45.369462       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 15:59:45.846025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 15:59:50.738857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="150.751µs"
	E0719 16:00:15.373940       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:00:15.854376       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:00:45.380556       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:00:45.862392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:01:15.386210       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:01:15.869856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:01:45.396108       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:01:45.881394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] <==
	I0719 15:48:34.241079       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:48:34.258100       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.37"]
	I0719 15:48:34.318820       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:48:34.318856       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:48:34.318871       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:48:34.321474       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:48:34.321780       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:48:34.321810       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:34.324352       1 config.go:192] "Starting service config controller"
	I0719 15:48:34.324390       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:48:34.324414       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:48:34.324418       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:48:34.327056       1 config.go:319] "Starting node config controller"
	I0719 15:48:34.327067       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:48:34.424912       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:48:34.425017       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:48:34.427901       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] <==
	I0719 15:48:30.831312       1 serving.go:380] Generated self-signed cert in-memory
	I0719 15:48:32.738517       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:48:32.739959       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:32.757455       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:48:32.758057       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0719 15:48:32.758109       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0719 15:48:32.758153       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 15:48:32.758812       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:48:32.767429       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:48:32.760123       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0719 15:48:32.769686       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0719 15:48:32.859266       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0719 15:48:32.867985       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:48:32.870084       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 19 15:59:28 embed-certs-817144 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 15:59:28 embed-certs-817144 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 15:59:28 embed-certs-817144 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 15:59:28 embed-certs-817144 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 15:59:39 embed-certs-817144 kubelet[940]: E0719 15:59:39.722039     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 15:59:50 embed-certs-817144 kubelet[940]: E0719 15:59:50.723946     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:00:05 embed-certs-817144 kubelet[940]: E0719 16:00:05.720504     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:00:19 embed-certs-817144 kubelet[940]: E0719 16:00:19.721286     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:00:28 embed-certs-817144 kubelet[940]: E0719 16:00:28.746501     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:00:28 embed-certs-817144 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:00:28 embed-certs-817144 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:00:28 embed-certs-817144 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:00:28 embed-certs-817144 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:00:30 embed-certs-817144 kubelet[940]: E0719 16:00:30.721537     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:00:45 embed-certs-817144 kubelet[940]: E0719 16:00:45.721032     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:00:59 embed-certs-817144 kubelet[940]: E0719 16:00:59.721191     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:01:11 embed-certs-817144 kubelet[940]: E0719 16:01:11.721375     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:01:26 embed-certs-817144 kubelet[940]: E0719 16:01:26.722555     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:01:28 embed-certs-817144 kubelet[940]: E0719 16:01:28.747984     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:01:28 embed-certs-817144 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:01:28 embed-certs-817144 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:01:28 embed-certs-817144 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:01:28 embed-certs-817144 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:01:41 embed-certs-817144 kubelet[940]: E0719 16:01:41.720330     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:01:55 embed-certs-817144 kubelet[940]: E0719 16:01:55.720855     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	
	
	==> storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] <==
	I0719 15:49:05.033921       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 15:49:05.044590       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 15:49:05.044954       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 15:49:22.449913       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 15:49:22.450791       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-817144_acfafd32-0778-426e-b469-e48471974d10!
	I0719 15:49:22.452435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6981e41-8a53-4193-8752-45c6c930dbfe", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-817144_acfafd32-0778-426e-b469-e48471974d10 became leader
	I0719 15:49:22.551709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-817144_acfafd32-0778-426e-b469-e48471974d10!
	
	
	==> storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] <==
	I0719 15:48:34.206575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 15:49:04.214828       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817144 -n embed-certs-817144
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-817144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2tsch
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-817144 describe pod metrics-server-569cc877fc-2tsch
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-817144 describe pod metrics-server-569cc877fc-2tsch: exit status 1 (58.185814ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2tsch" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-817144 describe pod metrics-server-569cc877fc-2tsch: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.18s)
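For reference, the post-mortem checks the suite ran above can be replayed by hand against the same cluster (assuming the embed-certs-817144 profile still exists); this is only a sketch that reuses the exact invocations already captured in this log:

	# confirm the apiserver status for the embed-certs-817144 profile
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817144 -n embed-certs-817144
	# list any pods that are not in the Running phase, across all namespaces
	kubectl --context embed-certs-817144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe the pod flagged above (returned NotFound in this run)
	kubectl --context embed-certs-817144 describe pod metrics-server-569cc877fc-2tsch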

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
E0719 15:57:29.032287   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
E0719 15:57:31.796832   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
[the preceding warning repeated 116 more times]
E0719 15:59:28.744420   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
[the preceding warning repeated 17 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
	[previous warning repeated 22 more times with identical content]
E0719 16:02:29.032448   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
	[previous warning repeated 118 more times with identical content]
E0719 16:04:28.744542   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
	[previous warning repeated 18 more times with identical content]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (225.141001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-862924" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
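For context on the "connection refused" warnings above: the test helper polls the apiserver for pods matching the k8s-app=kubernetes-dashboard label until its 9m0s deadline expires. The sketch below is a minimal client-go reproduction of that same query, not the helper's actual implementation; the kubeconfig path is a placeholder. With the apiserver stopped, the List call fails exactly as logged.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path: point this at the kubeconfig of the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same query the polling helper issues: list dashboard pods by label.
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// While the apiserver is stopped this returns "connection refused", as in the log.
		fmt.Println("list failed:", err)
		return
	}
	fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
}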
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (225.34942ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
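Aside: the two status checks above read single fields via minikube's Go-template output. If both are wanted in one call, a template combining them should work (a sketch using the profile name from this run; not taken verbatim from the test code):

out/minikube-linux-amd64 status -p old-k8s-version-862924 --format='{{.Host}} {{.APIServer}}'

A non-zero exit is expected while components are stopped, which is why the helper notes exit status 2 as "may be ok".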
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-862924 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-862924 logs -n 25: (1.598871985s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-127438 -- sudo                         | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-127438                                 | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-574044                           | kubernetes-upgrade-574044    | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-817144            | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:44:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:44:39.385249   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385257   59208 out.go:304] Setting ErrFile to fd 2...
	I0719 15:44:39.385261   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385405   59208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:44:39.385919   59208 out.go:298] Setting JSON to false
	I0719 15:44:39.386767   59208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5225,"bootTime":1721398654,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:44:39.386817   59208 start.go:139] virtualization: kvm guest
	I0719 15:44:39.390104   59208 out.go:177] * [default-k8s-diff-port-601445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:44:39.391867   59208 notify.go:220] Checking for updates...
	I0719 15:44:39.391890   59208 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:44:39.393463   59208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:44:39.394883   59208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:44:39.396081   59208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:44:39.397280   59208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:44:39.398540   59208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:44:39.400177   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:44:39.400543   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.400601   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.415749   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0719 15:44:39.416104   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.416644   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.416664   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.416981   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.417206   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.417443   59208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:44:39.417751   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.417793   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.432550   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0719 15:44:39.433003   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.433478   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.433504   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.433836   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.434083   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.467474   59208 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:44:38.674498   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:39.468897   59208 start.go:297] selected driver: kvm2
	I0719 15:44:39.468921   59208 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.469073   59208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:44:39.470083   59208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.470178   59208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:44:39.485232   59208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:44:39.485586   59208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:44:39.485616   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:44:39.485624   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:44:39.485666   59208 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.485752   59208 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.487537   59208 out.go:177] * Starting "default-k8s-diff-port-601445" primary control-plane node in "default-k8s-diff-port-601445" cluster
	I0719 15:44:39.488672   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:44:39.488709   59208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:44:39.488718   59208 cache.go:56] Caching tarball of preloaded images
	I0719 15:44:39.488795   59208 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:44:39.488807   59208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:44:39.488895   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:44:39.489065   59208 start.go:360] acquireMachinesLock for default-k8s-diff-port-601445: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:44:41.746585   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:47.826521   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:50.898507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:56.978531   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:00.050437   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:06.130631   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:09.202570   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:15.282481   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:18.354537   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:24.434488   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:27.506515   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:33.586522   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:36.658503   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:42.738573   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:45.810538   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:51.890547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:54.962507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:01.042509   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:04.114621   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:10.194576   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:13.266450   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:19.346524   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:22.418506   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:28.498553   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:31.570507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:37.650477   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:40.722569   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:46.802495   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:49.874579   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:55.954547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:59.026454   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:47:02.030619   58417 start.go:364] duration metric: took 4m36.939495617s to acquireMachinesLock for "no-preload-382231"
	I0719 15:47:02.030679   58417 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:02.030685   58417 fix.go:54] fixHost starting: 
	I0719 15:47:02.031010   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:02.031039   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:02.046256   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0719 15:47:02.046682   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:02.047151   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:47:02.047178   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:02.047573   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:02.047818   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:02.048023   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:47:02.049619   58417 fix.go:112] recreateIfNeeded on no-preload-382231: state=Stopped err=<nil>
	I0719 15:47:02.049641   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	W0719 15:47:02.049785   58417 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:02.051800   58417 out.go:177] * Restarting existing kvm2 VM for "no-preload-382231" ...
	I0719 15:47:02.028090   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:02.028137   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028489   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:47:02.028517   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:47:02.030488   58376 machine.go:97] duration metric: took 4m37.428160404s to provisionDockerMachine
	I0719 15:47:02.030529   58376 fix.go:56] duration metric: took 4m37.450063037s for fixHost
	I0719 15:47:02.030535   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 4m37.450081944s
	W0719 15:47:02.030559   58376 start.go:714] error starting host: provision: host is not running
	W0719 15:47:02.030673   58376 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 15:47:02.030686   58376 start.go:729] Will try again in 5 seconds ...
	I0719 15:47:02.053160   58417 main.go:141] libmachine: (no-preload-382231) Calling .Start
	I0719 15:47:02.053325   58417 main.go:141] libmachine: (no-preload-382231) Ensuring networks are active...
	I0719 15:47:02.054289   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network default is active
	I0719 15:47:02.054786   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network mk-no-preload-382231 is active
	I0719 15:47:02.055259   58417 main.go:141] libmachine: (no-preload-382231) Getting domain xml...
	I0719 15:47:02.056202   58417 main.go:141] libmachine: (no-preload-382231) Creating domain...
	I0719 15:47:03.270495   58417 main.go:141] libmachine: (no-preload-382231) Waiting to get IP...
	I0719 15:47:03.271595   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.272074   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.272151   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.272057   59713 retry.go:31] will retry after 239.502065ms: waiting for machine to come up
	I0719 15:47:03.513745   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.514224   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.514264   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.514191   59713 retry.go:31] will retry after 315.982717ms: waiting for machine to come up
	I0719 15:47:03.831739   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.832155   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.832187   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.832111   59713 retry.go:31] will retry after 468.820113ms: waiting for machine to come up
	I0719 15:47:04.302865   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.303273   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.303306   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.303236   59713 retry.go:31] will retry after 526.764683ms: waiting for machine to come up
	I0719 15:47:04.832048   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.832551   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.832583   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.832504   59713 retry.go:31] will retry after 754.533212ms: waiting for machine to come up
	I0719 15:47:07.032310   58376 start.go:360] acquireMachinesLock for embed-certs-817144: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:47:05.588374   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:05.588834   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:05.588862   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:05.588785   59713 retry.go:31] will retry after 757.18401ms: waiting for machine to come up
	I0719 15:47:06.347691   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:06.348135   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:06.348164   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:06.348053   59713 retry.go:31] will retry after 1.097437331s: waiting for machine to come up
	I0719 15:47:07.446836   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:07.447199   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:07.447219   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:07.447158   59713 retry.go:31] will retry after 1.448513766s: waiting for machine to come up
	I0719 15:47:08.897886   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:08.898289   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:08.898317   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:08.898216   59713 retry.go:31] will retry after 1.583843671s: waiting for machine to come up
	I0719 15:47:10.483476   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:10.483934   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:10.483963   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:10.483864   59713 retry.go:31] will retry after 1.86995909s: waiting for machine to come up
	I0719 15:47:12.355401   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:12.355802   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:12.355827   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:12.355762   59713 retry.go:31] will retry after 2.577908462s: waiting for machine to come up
	I0719 15:47:14.934837   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:14.935263   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:14.935285   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:14.935225   59713 retry.go:31] will retry after 3.158958575s: waiting for machine to come up
	I0719 15:47:19.278747   58817 start.go:364] duration metric: took 3m55.914249116s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:47:19.278822   58817 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:19.278831   58817 fix.go:54] fixHost starting: 
	I0719 15:47:19.279163   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:19.279196   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:19.294722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0719 15:47:19.295092   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:19.295537   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:47:19.295561   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:19.295950   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:19.296186   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:19.296333   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:47:19.297864   58817 fix.go:112] recreateIfNeeded on old-k8s-version-862924: state=Stopped err=<nil>
	I0719 15:47:19.297895   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	W0719 15:47:19.298077   58817 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:19.300041   58817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	I0719 15:47:18.095456   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095912   58417 main.go:141] libmachine: (no-preload-382231) Found IP for machine: 192.168.39.227
	I0719 15:47:18.095936   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has current primary IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095942   58417 main.go:141] libmachine: (no-preload-382231) Reserving static IP address...
	I0719 15:47:18.096317   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.096357   58417 main.go:141] libmachine: (no-preload-382231) Reserved static IP address: 192.168.39.227
	I0719 15:47:18.096376   58417 main.go:141] libmachine: (no-preload-382231) DBG | skip adding static IP to network mk-no-preload-382231 - found existing host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"}
	I0719 15:47:18.096392   58417 main.go:141] libmachine: (no-preload-382231) DBG | Getting to WaitForSSH function...
	I0719 15:47:18.096407   58417 main.go:141] libmachine: (no-preload-382231) Waiting for SSH to be available...
	I0719 15:47:18.098619   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.098978   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.099008   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.099122   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH client type: external
	I0719 15:47:18.099151   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa (-rw-------)
	I0719 15:47:18.099183   58417 main.go:141] libmachine: (no-preload-382231) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:18.099196   58417 main.go:141] libmachine: (no-preload-382231) DBG | About to run SSH command:
	I0719 15:47:18.099210   58417 main.go:141] libmachine: (no-preload-382231) DBG | exit 0
	I0719 15:47:18.222285   58417 main.go:141] libmachine: (no-preload-382231) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:18.222607   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetConfigRaw
	I0719 15:47:18.223181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.225751   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226062   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.226105   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226327   58417 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/config.json ...
	I0719 15:47:18.226504   58417 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:18.226520   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:18.226684   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.228592   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.228936   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.228960   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.229094   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.229246   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229398   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229516   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.229663   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.229887   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.229901   58417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:18.330731   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:18.330764   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331053   58417 buildroot.go:166] provisioning hostname "no-preload-382231"
	I0719 15:47:18.331084   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331282   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.333905   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334212   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.334270   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334331   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.334510   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334705   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334850   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.335030   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.335216   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.335230   58417 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-382231 && echo "no-preload-382231" | sudo tee /etc/hostname
	I0719 15:47:18.453128   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-382231
	
	I0719 15:47:18.453151   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.455964   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456323   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.456349   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456549   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.456822   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457010   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457158   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.457300   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.457535   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.457561   58417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-382231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-382231/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-382231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:18.568852   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:18.568878   58417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:18.568902   58417 buildroot.go:174] setting up certificates
	I0719 15:47:18.568915   58417 provision.go:84] configureAuth start
	I0719 15:47:18.568924   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.569240   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.571473   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.571757   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.571783   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.572029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.573941   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574213   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.574247   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574393   58417 provision.go:143] copyHostCerts
	I0719 15:47:18.574455   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:18.574465   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:18.574528   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:18.574615   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:18.574622   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:18.574645   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:18.574696   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:18.574703   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:18.574722   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:18.574768   58417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.no-preload-382231 san=[127.0.0.1 192.168.39.227 localhost minikube no-preload-382231]
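	The provision.go line above generates a TLS server certificate for the machine, signed by the shared minikube CA, with the listed SANs. Below is a minimal Go sketch of building such a certificate with crypto/x509; the SAN values are taken from the log line, but the self-signed signing, RSA key size, and one-year validity are illustrative assumptions rather than minikube's actual parameters.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs taken from the provision.go log line; validity and key size are
	// illustrative, not minikube's real values.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-382231"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-382231"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed here for brevity; the real flow signs with the shared CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```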
	I0719 15:47:18.636408   58417 provision.go:177] copyRemoteCerts
	I0719 15:47:18.636458   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:18.636477   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.638719   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639021   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.639054   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639191   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.639379   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.639532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.639795   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:18.720305   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:18.742906   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:18.764937   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:47:18.787183   58417 provision.go:87] duration metric: took 218.257504ms to configureAuth
	I0719 15:47:18.787205   58417 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:18.787355   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:47:18.787418   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.789685   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.789992   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.790017   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.790181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.790366   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790632   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.790770   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.790929   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.790943   58417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:19.053326   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:19.053350   58417 machine.go:97] duration metric: took 826.83404ms to provisionDockerMachine
	I0719 15:47:19.053364   58417 start.go:293] postStartSetup for "no-preload-382231" (driver="kvm2")
	I0719 15:47:19.053379   58417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:19.053409   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.053733   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:19.053755   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.056355   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056709   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.056737   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056884   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.057037   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.057172   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.057370   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.136785   58417 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:19.140756   58417 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:19.140777   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:19.140847   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:19.140941   58417 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:19.141044   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:19.150247   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:19.172800   58417 start.go:296] duration metric: took 119.424607ms for postStartSetup
	I0719 15:47:19.172832   58417 fix.go:56] duration metric: took 17.142146552s for fixHost
	I0719 15:47:19.172849   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.175427   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.175816   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.175851   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.176027   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.176281   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176636   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.176892   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:19.177051   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:19.177061   58417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:19.278564   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404039.251890495
	
	I0719 15:47:19.278594   58417 fix.go:216] guest clock: 1721404039.251890495
	I0719 15:47:19.278605   58417 fix.go:229] Guest: 2024-07-19 15:47:19.251890495 +0000 UTC Remote: 2024-07-19 15:47:19.172835531 +0000 UTC m=+294.220034318 (delta=79.054964ms)
	I0719 15:47:19.278651   58417 fix.go:200] guest clock delta is within tolerance: 79.054964ms
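	fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to a host-side timestamp, and logs the delta. A small self-contained Go sketch of the same comparison, using the two timestamps from the log; the 2-second tolerance is an assumed value for illustration, not minikube's real threshold.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` over SSH (value from the log).
	guestRaw := "1721404039.251890495"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp taken just before the SSH call (also from the log).
	remote := time.Date(2024, 7, 19, 15, 47, 19, 172835531, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// Assumed threshold, for illustration only.
	tolerance := 2 * time.Second
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta <= tolerance {
		fmt.Println("guest clock delta is within tolerance")
	} else {
		fmt.Println("guest clock needs to be adjusted")
	}
}
```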
	I0719 15:47:19.278659   58417 start.go:83] releasing machines lock for "no-preload-382231", held for 17.247997118s
	I0719 15:47:19.278692   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.279029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:19.281674   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282034   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.282063   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282221   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282750   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282935   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282991   58417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:19.283061   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.283095   58417 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:19.283116   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.285509   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285805   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.285828   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285846   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285959   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286182   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286276   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.286300   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.286468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286481   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286632   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.286672   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286806   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286935   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.363444   58417 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:19.387514   58417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:19.545902   58417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:19.551747   58417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:19.551812   58417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:19.568563   58417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:19.568589   58417 start.go:495] detecting cgroup driver to use...
	I0719 15:47:19.568654   58417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:19.589440   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:19.604889   58417 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:19.604962   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:19.624114   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:19.638265   58417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:19.752880   58417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:19.900078   58417 docker.go:233] disabling docker service ...
	I0719 15:47:19.900132   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:19.914990   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:19.928976   58417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:20.079363   58417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:20.203629   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:20.218502   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:20.237028   58417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 15:47:20.237089   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.248514   58417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:20.248597   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.260162   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.272166   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.283341   58417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:20.294687   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.305495   58417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.328024   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
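	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager. A hedged Go sketch of the same two substitutions applied to an in-memory sample config (the starting file contents are made up; only the replacement values mirror the log):

```go
package main

import (
	"fmt"
	"regexp"
)

// A toy stand-in for /etc/crio/crio.conf.d/02-crio.conf; the real file on the
// node is edited in place with sed, as the log shows.
const sample = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`

func main() {
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(sample, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
	fmt.Print(out)
}
```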
	I0719 15:47:20.339666   58417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:20.349271   58417 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:20.349314   58417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:20.364130   58417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:20.376267   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:20.501259   58417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:20.643763   58417 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:20.643828   58417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:20.648525   58417 start.go:563] Will wait 60s for crictl version
	I0719 15:47:20.648586   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:20.652256   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:20.689386   58417 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:20.689468   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.720662   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.751393   58417 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 15:47:19.301467   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .Start
	I0719 15:47:19.301647   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:47:19.302430   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:47:19.302790   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:47:19.303288   58817 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:47:19.304087   58817 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:47:20.540210   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:47:20.541173   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.541580   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.541657   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.541560   59851 retry.go:31] will retry after 276.525447ms: waiting for machine to come up
	I0719 15:47:20.820097   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.820549   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.820577   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.820512   59851 retry.go:31] will retry after 350.128419ms: waiting for machine to come up
	I0719 15:47:21.172277   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.172787   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.172814   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.172742   59851 retry.go:31] will retry after 437.780791ms: waiting for machine to come up
	I0719 15:47:21.612338   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.612766   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.612796   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.612710   59851 retry.go:31] will retry after 607.044351ms: waiting for machine to come up
	I0719 15:47:22.221152   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.221715   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.221755   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.221589   59851 retry.go:31] will retry after 568.388882ms: waiting for machine to come up
	I0719 15:47:22.791499   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.791966   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.791996   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.791912   59851 retry.go:31] will retry after 786.805254ms: waiting for machine to come up
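	While waiting for the VM to obtain a DHCP lease, retry.go sleeps for a growing, jittered interval between lookups, as the "will retry after ..." lines show. A self-contained Go sketch of that retry-with-backoff pattern; the findIP stub, attempt limit, and exact backoff factors are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// findIP stands in for querying the DHCP leases of the libvirt network;
// here it simply fails a few times before "finding" an address.
func findIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.50.102", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := findIP(attempt)
		if err == nil {
			fmt.Printf("Found IP for machine: %s\n", ip)
			return
		}
		// Grow the delay and add jitter, similar to the "will retry after ..." lines.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for machine IP")
}
```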
	I0719 15:47:20.752939   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:20.755996   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756367   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:20.756395   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756723   58417 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:20.760962   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
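	The command above rewrites /etc/hosts on the guest: it drops any stale host.minikube.internal entry and appends the gateway address. A minimal Go sketch of the same filter-and-append step on an in-memory copy of the file (the sample contents are assumptions; only the entry format and the 192.168.39.1 address come from the log):

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteHosts drops any existing host.minikube.internal line and appends
// the current gateway address, mirroring the shell pipeline in the log.
func rewriteHosts(hosts, gatewayIP string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, gatewayIP+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	// Sample file contents, made up for the sketch.
	sample := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	fmt.Print(rewriteHosts(sample, "192.168.39.1"))
}
```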
	I0719 15:47:20.776973   58417 kubeadm.go:883] updating cluster {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:20.777084   58417 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 15:47:20.777120   58417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:20.814520   58417 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 15:47:20.814547   58417 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:20.814631   58417 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:20.814650   58417 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.814657   58417 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.814682   58417 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.814637   58417 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.814736   58417 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.814808   58417 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.814742   58417 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.816435   58417 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.816446   58417 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.816513   58417 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.816535   58417 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816559   58417 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.816719   58417 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:21.003845   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 15:47:21.028954   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.039628   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.041391   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.065499   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.084966   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.142812   58417 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 15:47:21.142873   58417 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 15:47:21.142905   58417 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.142921   58417 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 15:47:21.142939   58417 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.142962   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142877   58417 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.143025   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142983   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.160141   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.182875   58417 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 15:47:21.182918   58417 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.182945   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.182958   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.182957   58417 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 15:47:21.182992   58417 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.183029   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.183044   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.183064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.272688   58417 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 15:47:21.272724   58417 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.272768   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.272783   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272825   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.272876   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272906   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 15:47:21.272931   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.272971   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:21.272997   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 15:47:21.273064   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:21.326354   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326356   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.326441   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 15:47:21.326457   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326459   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326492   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326497   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.326529   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 15:47:21.326535   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 15:47:21.326633   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.363401   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:21.363496   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:22.268448   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.010876   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.684346805s)
	I0719 15:47:24.010910   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 15:47:24.010920   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.684439864s)
	I0719 15:47:24.010952   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 15:47:24.010930   58417 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.010993   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.684342001s)
	I0719 15:47:24.011014   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.011019   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011046   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.647533327s)
	I0719 15:47:24.011066   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011098   58417 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742620594s)
	I0719 15:47:24.011137   58417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 15:47:24.011170   58417 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.011204   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:23.580485   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:23.580950   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:23.580983   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:23.580876   59851 retry.go:31] will retry after 919.322539ms: waiting for machine to come up
	I0719 15:47:24.502381   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:24.502817   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:24.502844   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:24.502776   59851 retry.go:31] will retry after 1.142581835s: waiting for machine to come up
	I0719 15:47:25.647200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:25.647663   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:25.647693   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:25.647559   59851 retry.go:31] will retry after 1.682329055s: waiting for machine to come up
	I0719 15:47:27.332531   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:27.333052   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:27.333080   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:27.333003   59851 retry.go:31] will retry after 1.579786507s: waiting for machine to come up
	I0719 15:47:27.292973   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.281931356s)
	I0719 15:47:27.293008   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 15:47:27.293001   58417 ssh_runner.go:235] Completed: which crictl: (3.281778521s)
	I0719 15:47:27.293043   58417 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:27.293064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:27.293086   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:29.269642   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976526914s)
	I0719 15:47:29.269676   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 15:47:29.269698   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269641   58417 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.97655096s)
	I0719 15:47:29.269748   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269773   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 15:47:29.269875   58417 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:28.914628   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:28.915181   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:28.915221   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:28.915127   59851 retry.go:31] will retry after 2.156491688s: waiting for machine to come up
	I0719 15:47:31.073521   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:31.074101   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:31.074136   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:31.074039   59851 retry.go:31] will retry after 2.252021853s: waiting for machine to come up
	I0719 15:47:31.242199   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.972421845s)
	I0719 15:47:31.242257   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 15:47:31.242273   58417 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.972374564s)
	I0719 15:47:31.242283   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:31.242306   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 15:47:31.242334   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:32.592736   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.350379333s)
	I0719 15:47:32.592762   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 15:47:32.592782   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:32.592817   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:34.547084   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954243196s)
	I0719 15:47:34.547122   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 15:47:34.547155   58417 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:34.547231   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
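	The cache_images flow above first inspects each image in the runtime's storage and, only when it is missing, loads the cached tarball with `sudo podman load -i`. A rough Go sketch of that check-then-load step using os/exec; the image name and tarball path are placeholders, and running it requires sudo and podman on the host.

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureImage mirrors the flow in the log: inspect the image in podman's
// storage and, if it is missing, load it from an on-disk cache tarball.
// The arguments passed in main are placeholders, not the test's real values.
func ensureImage(image, cachedTarball string) error {
	inspect := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	if err := inspect.Run(); err == nil {
		fmt.Printf("%q already present, skipping load\n", image)
		return nil
	}
	fmt.Printf("%q needs transfer, loading %s\n", image, cachedTarball)
	load := exec.Command("sudo", "podman", "load", "-i", cachedTarball)
	out, err := load.CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.10", "/var/lib/minikube/images/pause_3.10"); err != nil {
		fmt.Println("error:", err)
	}
}
```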
	I0719 15:47:33.328344   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:33.328815   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:33.328849   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:33.328779   59851 retry.go:31] will retry after 4.118454422s: waiting for machine to come up
	I0719 15:47:37.451169   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.451651   58817 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:47:37.451677   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:47:37.451691   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.452205   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.452240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | skip adding static IP to network mk-old-k8s-version-862924 - found existing host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"}
	I0719 15:47:37.452258   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:47:37.452276   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:47:37.452287   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:47:37.454636   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455004   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.455043   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455210   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:47:37.455242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:47:37.455284   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:37.455302   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:47:37.455316   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:47:37.583375   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:37.583754   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:47:37.584481   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.587242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587644   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.587668   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587961   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:47:37.588195   58817 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:37.588217   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:37.588446   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.590801   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591137   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.591166   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.591471   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591592   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591736   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.591896   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.592100   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.592111   58817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:37.698760   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:37.698787   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699086   58817 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:47:37.699113   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699326   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.701828   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702208   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.702253   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702339   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.702508   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702674   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702817   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.702983   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.703136   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.703147   58817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:47:37.823930   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:47:37.823960   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.826546   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.826875   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.826912   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.827043   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.827336   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827506   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827690   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.827858   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.828039   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.828056   58817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:37.935860   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:37.935888   58817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:37.935917   58817 buildroot.go:174] setting up certificates
	I0719 15:47:37.935927   58817 provision.go:84] configureAuth start
	I0719 15:47:37.935939   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.936223   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.938638   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.938990   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.939017   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.939170   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.941161   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941458   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.941487   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941597   58817 provision.go:143] copyHostCerts
	I0719 15:47:37.941669   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:37.941682   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:37.941731   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:37.941824   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:37.941832   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:37.941850   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:37.941910   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:37.941919   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:37.941942   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:37.942003   58817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
	I0719 15:47:38.046717   58817 provision.go:177] copyRemoteCerts
	I0719 15:47:38.046770   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:38.046799   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.049240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049578   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.049611   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049806   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.050026   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.050200   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.050377   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.133032   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:38.157804   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:47:38.184189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:38.207761   58817 provision.go:87] duration metric: took 271.801669ms to configureAuth
	I0719 15:47:38.207801   58817 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:38.208023   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:47:38.208148   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.211030   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211467   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.211497   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211675   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.211851   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212046   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212195   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.212374   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.212556   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.212578   58817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:38.759098   59208 start.go:364] duration metric: took 2m59.27000152s to acquireMachinesLock for "default-k8s-diff-port-601445"
	I0719 15:47:38.759165   59208 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:38.759176   59208 fix.go:54] fixHost starting: 
	I0719 15:47:38.759633   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:38.759685   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:38.779587   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0719 15:47:38.779979   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:38.780480   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:47:38.780497   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:38.780888   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:38.781129   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:38.781260   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:47:38.782786   59208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601445: state=Stopped err=<nil>
	I0719 15:47:38.782860   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	W0719 15:47:38.783056   59208 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:38.785037   59208 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-601445" ...
	I0719 15:47:38.786497   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Start
	I0719 15:47:38.786691   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring networks are active...
	I0719 15:47:38.787520   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network default is active
	I0719 15:47:38.787819   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network mk-default-k8s-diff-port-601445 is active
	I0719 15:47:38.788418   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Getting domain xml...
	I0719 15:47:38.789173   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Creating domain...
	I0719 15:47:35.191148   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 15:47:35.191193   58417 cache_images.go:123] Successfully loaded all cached images
	I0719 15:47:35.191198   58417 cache_images.go:92] duration metric: took 14.376640053s to LoadCachedImages
	I0719 15:47:35.191209   58417 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0-beta.0 crio true true} ...
	I0719 15:47:35.191329   58417 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-382231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:35.191424   58417 ssh_runner.go:195] Run: crio config
	I0719 15:47:35.236248   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:35.236276   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:35.236288   58417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:35.236309   58417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-382231 NodeName:no-preload-382231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:47:35.236464   58417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-382231"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:35.236525   58417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 15:47:35.247524   58417 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:35.247611   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:35.257583   58417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 15:47:35.275057   58417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 15:47:35.291468   58417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0719 15:47:35.308021   58417 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:35.312121   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:35.324449   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:35.451149   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:35.477844   58417 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231 for IP: 192.168.39.227
	I0719 15:47:35.477868   58417 certs.go:194] generating shared ca certs ...
	I0719 15:47:35.477887   58417 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:35.478043   58417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:35.478093   58417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:35.478103   58417 certs.go:256] generating profile certs ...
	I0719 15:47:35.478174   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.key
	I0719 15:47:35.478301   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key.46f9a235
	I0719 15:47:35.478339   58417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key
	I0719 15:47:35.478482   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:35.478520   58417 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:35.478530   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:35.478549   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:35.478569   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:35.478591   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:35.478628   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:35.479291   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:35.523106   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:35.546934   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:35.585616   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:35.617030   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:47:35.641486   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:47:35.680051   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:35.703679   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:47:35.728088   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:35.751219   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:35.774149   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:35.796985   58417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:35.813795   58417 ssh_runner.go:195] Run: openssl version
	I0719 15:47:35.819568   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:35.830350   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834792   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834847   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.840531   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:35.851584   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:35.862655   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867139   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867199   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.872916   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:35.883986   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:35.894795   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899001   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899049   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.904496   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:35.915180   58417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:35.919395   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:35.926075   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:35.931870   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:35.938089   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:35.944079   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:35.950449   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:47:35.956291   58417 kubeadm.go:392] StartCluster: {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:35.956396   58417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:35.956452   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:35.993976   58417 cri.go:89] found id: ""
	I0719 15:47:35.994047   58417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:36.004507   58417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:36.004532   58417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:36.004579   58417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:36.014644   58417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:36.015628   58417 kubeconfig.go:125] found "no-preload-382231" server: "https://192.168.39.227:8443"
	I0719 15:47:36.017618   58417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:36.027252   58417 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0719 15:47:36.027281   58417 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:36.027292   58417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:36.027350   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:36.066863   58417 cri.go:89] found id: ""
	I0719 15:47:36.066934   58417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:36.082971   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:36.092782   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:36.092802   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:36.092841   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:36.101945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:36.101998   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:36.111368   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:36.120402   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:36.120447   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:36.130124   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.138945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:36.138990   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.148176   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:36.157008   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:36.157060   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:36.166273   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:36.176032   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:36.291855   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.285472   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.476541   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.547807   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.652551   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:37.652649   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.153088   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.653690   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.718826   58417 api_server.go:72] duration metric: took 1.066275053s to wait for apiserver process to appear ...
	I0719 15:47:38.718858   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:47:38.718891   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:38.503709   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:38.503737   58817 machine.go:97] duration metric: took 915.527957ms to provisionDockerMachine
	I0719 15:47:38.503750   58817 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:47:38.503762   58817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:38.503783   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.504151   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:38.504180   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.507475   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.507843   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.507877   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.508083   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.508314   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.508465   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.508583   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.593985   58817 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:38.598265   58817 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:38.598287   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:38.598352   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:38.598446   58817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:38.598533   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:38.609186   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:38.644767   58817 start.go:296] duration metric: took 141.002746ms for postStartSetup
	I0719 15:47:38.644808   58817 fix.go:56] duration metric: took 19.365976542s for fixHost
	I0719 15:47:38.644836   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.648171   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648545   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.648576   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648777   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.649009   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649185   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649360   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.649513   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.649779   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.649795   58817 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:38.758955   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404058.716653194
	
	I0719 15:47:38.758978   58817 fix.go:216] guest clock: 1721404058.716653194
	I0719 15:47:38.758987   58817 fix.go:229] Guest: 2024-07-19 15:47:38.716653194 +0000 UTC Remote: 2024-07-19 15:47:38.644812576 +0000 UTC m=+255.418683135 (delta=71.840618ms)
	I0719 15:47:38.759010   58817 fix.go:200] guest clock delta is within tolerance: 71.840618ms
	I0719 15:47:38.759017   58817 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 19.4802155s
	I0719 15:47:38.759056   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.759308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:38.761901   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762334   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.762368   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762525   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763030   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763198   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763296   58817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:38.763343   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.763489   58817 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:38.763522   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.766613   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.766771   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767028   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767050   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767219   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767298   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767377   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767453   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767577   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767637   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767723   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767768   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.767845   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.874680   58817 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:38.882155   58817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:39.030824   58817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:39.038357   58817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:39.038458   58817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:39.059981   58817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:39.060015   58817 start.go:495] detecting cgroup driver to use...
	I0719 15:47:39.060081   58817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:39.082631   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:39.101570   58817 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:39.101628   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:39.120103   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:39.139636   58817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:39.259574   58817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:39.441096   58817 docker.go:233] disabling docker service ...
	I0719 15:47:39.441162   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:39.460197   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:39.476884   58817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:39.639473   58817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:39.773468   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:39.790968   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:39.811330   58817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:47:39.811407   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.823965   58817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:39.824057   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.835454   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.846201   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.856951   58817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:39.869495   58817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:39.880850   58817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:39.880914   58817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:39.900465   58817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:39.911488   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:40.032501   58817 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:40.194606   58817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:40.194676   58817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:40.199572   58817 start.go:563] Will wait 60s for crictl version
	I0719 15:47:40.199683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:40.203747   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:40.246479   58817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:40.246594   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.275992   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.313199   58817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:47:40.314363   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:40.317688   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318081   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:40.318106   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318333   58817 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:40.323006   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:40.336488   58817 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:40.336626   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:47:40.336672   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:40.394863   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:40.394934   58817 ssh_runner.go:195] Run: which lz4
	I0719 15:47:40.399546   58817 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:47:40.404163   58817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:47:40.404197   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:47:42.191817   58817 crio.go:462] duration metric: took 1.792317426s to copy over tarball
	I0719 15:47:42.191882   58817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:41.984204   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:41.984237   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:41.984255   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.031024   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:42.031055   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:42.219815   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.256851   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.256888   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:42.719015   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.756668   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.756705   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.219173   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.255610   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:43.255645   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.719116   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.725453   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:47:43.739070   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:47:43.739108   58417 api_server.go:131] duration metric: took 5.020238689s to wait for apiserver health ...
	I0719 15:47:43.739119   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:43.739128   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:43.741458   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
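
The api_server.go loop above keeps polling /healthz on the restarted apiserver, logging 403 (RBAC not bootstrapped yet) and 500 (post-start hooks still failing) responses until a 200 arrives. A minimal sketch of that polling pattern follows, assuming an anonymous, certificate-insecure client; the 500 ms retry interval is an assumption, not minikube's exact timing.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
// 403/500 responses are treated as "apiserver still starting", matching
// the progression visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The probe runs unauthenticated, so certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.227:8443/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
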
	I0719 15:47:40.069048   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting to get IP...
	I0719 15:47:40.069866   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070409   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070480   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.070379   59996 retry.go:31] will retry after 299.168281ms: waiting for machine to come up
	I0719 15:47:40.370939   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371381   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.371340   59996 retry.go:31] will retry after 388.345842ms: waiting for machine to come up
	I0719 15:47:40.761301   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762861   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762889   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.762797   59996 retry.go:31] will retry after 305.39596ms: waiting for machine to come up
	I0719 15:47:41.070215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070791   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070823   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.070746   59996 retry.go:31] will retry after 452.50233ms: waiting for machine to come up
	I0719 15:47:41.525465   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.525997   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.526019   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.525920   59996 retry.go:31] will retry after 686.050268ms: waiting for machine to come up
	I0719 15:47:42.214012   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214513   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214545   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:42.214465   59996 retry.go:31] will retry after 867.815689ms: waiting for machine to come up
	I0719 15:47:43.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084240   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:43.084198   59996 retry.go:31] will retry after 1.006018507s: waiting for machine to come up
	I0719 15:47:44.092571   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093050   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:44.092992   59996 retry.go:31] will retry after 961.604699ms: waiting for machine to come up
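
Meanwhile the 59208 run waits for libvirt to report an IP address for the default-k8s-diff-port-601445 domain, retrying with a growing, jittered delay. The sketch below shows that retry shape only; lookupIP is hypothetical and stands in for the actual libvirt query.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for asking libvirt for the domain's
// current IP; it only exists to illustrate the retry loop in the log above.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(domain string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// The delay grows with the attempt number and gets some jitter,
		// which is why the logged intervals (299ms, 388ms, ...) vary.
		delay := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("machine %s never reported an IP", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-601445", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
```
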
	I0719 15:47:43.743125   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:47:43.780558   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:47:43.825123   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:47:43.849564   58417 system_pods.go:59] 8 kube-system pods found
	I0719 15:47:43.849608   58417 system_pods.go:61] "coredns-5cfdc65f69-9p4dr" [b6744bc9-b683-4f7e-b506-a95eb58ac308] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:47:43.849620   58417 system_pods.go:61] "etcd-no-preload-382231" [1f2704ae-84a0-4636-9826-f6bb5d2cb8b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:47:43.849632   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [e4ae90fb-9024-4420-9249-6f936ff43894] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:47:43.849643   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [ceb3538d-a6b9-4135-b044-b139003baf35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:47:43.849650   58417 system_pods.go:61] "kube-proxy-z2z9r" [fdc0eb8f-2884-436b-ba1e-4c71107f756c] Running
	I0719 15:47:43.849657   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [5ae3221b-7186-4dbe-9b1b-fb4c8c239c62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:47:43.849677   58417 system_pods.go:61] "metrics-server-78fcd8795b-zwr8g" [4d4de9aa-89f2-4cf4-85c2-26df25bd82c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:47:43.849687   58417 system_pods.go:61] "storage-provisioner" [ab5ce17f-a0da-4ab7-803e-245ba4363d09] Running
	I0719 15:47:43.849696   58417 system_pods.go:74] duration metric: took 24.54438ms to wait for pod list to return data ...
	I0719 15:47:43.849709   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:47:43.864512   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:47:43.864636   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:47:43.864684   58417 node_conditions.go:105] duration metric: took 14.967708ms to run NodePressure ...
	I0719 15:47:43.864727   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:44.524399   58417 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531924   58417 kubeadm.go:739] kubelet initialised
	I0719 15:47:44.531944   58417 kubeadm.go:740] duration metric: took 7.516197ms waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531952   58417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:47:44.538016   58417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:45.377244   58817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18533335s)
	I0719 15:47:45.377275   58817 crio.go:469] duration metric: took 3.185430213s to extract the tarball
	I0719 15:47:45.377282   58817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:47:45.422160   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:45.463351   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:45.463377   58817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:45.463437   58817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.463445   58817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.463484   58817 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.463496   58817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.463452   58817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.463470   58817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465250   58817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465259   58817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.465270   58817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.465280   58817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.465252   58817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.465254   58817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.465322   58817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.465358   58817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.652138   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:47:45.694548   58817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:47:45.694600   58817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:47:45.694655   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.698969   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:47:45.721986   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.747138   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:47:45.779449   58817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:47:45.779485   58817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.779526   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.783597   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.822950   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.825025   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:47:45.830471   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.835797   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.837995   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.840998   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.907741   58817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:47:45.907793   58817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.907845   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.928805   58817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:47:45.928844   58817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.928918   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.948467   58817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:47:45.948522   58817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.948571   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.966584   58817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:47:45.966629   58817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.966683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975276   58817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:47:45.975316   58817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.975339   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.975355   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975378   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.975424   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.975449   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:46.069073   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:46.069100   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:47:46.079020   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:47:46.080816   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:47:46.080818   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:47:46.111983   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:47:46.308204   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:46.465651   58817 cache_images.go:92] duration metric: took 1.002255395s to LoadCachedImages
	W0719 15:47:46.465740   58817 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0719 15:47:46.465753   58817 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:47:46.465899   58817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:46.465973   58817 ssh_runner.go:195] Run: crio config
	I0719 15:47:46.524125   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:47:46.524152   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:46.524167   58817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:46.524190   58817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:47:46.524322   58817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:46.524476   58817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:47:46.534654   58817 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:46.534726   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:46.544888   58817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:47:46.565864   58817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:47:46.584204   58817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:47:46.603470   58817 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:46.607776   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:46.624713   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:46.752753   58817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:46.776115   58817 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:47:46.776151   58817 certs.go:194] generating shared ca certs ...
	I0719 15:47:46.776182   58817 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:46.776376   58817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:46.776431   58817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:46.776443   58817 certs.go:256] generating profile certs ...
	I0719 15:47:46.776559   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:47:46.776622   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:47:46.776673   58817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:47:46.776811   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:46.776860   58817 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:46.776880   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:46.776922   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:46.776961   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:46.776991   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:46.777051   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:46.777929   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:46.815207   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:46.863189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:46.894161   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:46.932391   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:47:46.981696   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:47:47.016950   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:47.043597   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:47:47.067408   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:47.092082   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:47.116639   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:47.142425   58817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:47.161443   58817 ssh_runner.go:195] Run: openssl version
	I0719 15:47:47.167678   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:47.180194   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185276   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185330   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.191437   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:47.203471   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:47.215645   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220392   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220444   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.226332   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:47.238559   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:47.251382   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256213   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256268   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.262261   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:47.275192   58817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:47.280176   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:47.288308   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:47.295013   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:47.301552   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:47.307628   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:47.313505   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
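
Each openssl x509 -checkend 86400 call above simply asks whether the certificate will still be valid 24 hours from now. An equivalent check in Go, as a rough sketch using the first path from the log above:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will be
// expired after the given grace period (86400 s in the log's openssl calls).
func expiresWithin(path string, grace time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(grace).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
```
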
	I0719 15:47:47.319956   58817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:47.320042   58817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:47.320097   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.359706   58817 cri.go:89] found id: ""
	I0719 15:47:47.359789   58817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:47.373816   58817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:47.373839   58817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:47.373907   58817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:47.386334   58817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:47.387432   58817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:47:47.388146   58817 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-3847/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862924" cluster setting kubeconfig missing "old-k8s-version-862924" context setting]
	I0719 15:47:47.389641   58817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:47.393000   58817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:47.404737   58817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.102
	I0719 15:47:47.404770   58817 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:47.404782   58817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:47.404847   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.448460   58817 cri.go:89] found id: ""
	I0719 15:47:47.448529   58817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:47.466897   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:47.479093   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:47.479136   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:47.479201   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:47.490338   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:47.490425   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:47.502079   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:47.514653   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:47.514722   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:47.526533   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.536043   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:47.536109   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.545691   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:47.555221   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:47.555295   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:47.564645   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:47.574094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:47.740041   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:45.055856   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056318   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056347   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:45.056263   59996 retry.go:31] will retry after 1.300059023s: waiting for machine to come up
	I0719 15:47:46.357875   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358379   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358407   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:46.358331   59996 retry.go:31] will retry after 2.269558328s: waiting for machine to come up
	I0719 15:47:48.630965   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631641   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631674   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:48.631546   59996 retry.go:31] will retry after 2.829487546s: waiting for machine to come up
	I0719 15:47:47.449778   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:48.045481   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:48.045508   58417 pod_ready.go:81] duration metric: took 3.507466621s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.045521   58417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.272472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.545776   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.692516   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.799640   58817 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:48.799721   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.299983   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.800470   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.300833   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.800741   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.300351   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.800185   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.463569   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464003   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:51.463968   59996 retry.go:31] will retry after 2.917804786s: waiting for machine to come up
	I0719 15:47:54.383261   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383967   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383993   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:54.383924   59996 retry.go:31] will retry after 4.044917947s: waiting for machine to come up
	I0719 15:47:50.052168   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:51.052114   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:51.052135   58417 pod_ready.go:81] duration metric: took 3.006607122s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:51.052144   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059540   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:52.059563   58417 pod_ready.go:81] duration metric: took 1.007411773s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059576   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.066338   58417 pod_ready.go:102] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:54.567056   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.567076   58417 pod_ready.go:81] duration metric: took 2.507493559s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.567085   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571655   58417 pod_ready.go:92] pod "kube-proxy-z2z9r" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.571672   58417 pod_ready.go:81] duration metric: took 4.581191ms for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571680   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.575983   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.576005   58417 pod_ready.go:81] duration metric: took 4.315788ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.576017   58417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:53.300353   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.800804   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.800691   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.800502   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.300314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.300773   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.432420   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432945   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Found IP for machine: 192.168.61.144
	I0719 15:47:58.432976   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has current primary IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432988   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserving static IP address...
	I0719 15:47:58.433361   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.433395   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | skip adding static IP to network mk-default-k8s-diff-port-601445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"}
	I0719 15:47:58.433412   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserved static IP address: 192.168.61.144
	I0719 15:47:58.433430   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for SSH to be available...
	I0719 15:47:58.433442   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Getting to WaitForSSH function...
	I0719 15:47:58.435448   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435770   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.435807   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435868   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH client type: external
	I0719 15:47:58.435930   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa (-rw-------)
	I0719 15:47:58.435973   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:58.435992   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | About to run SSH command:
	I0719 15:47:58.436002   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | exit 0
	I0719 15:47:58.562187   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:58.562564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetConfigRaw
	I0719 15:47:58.563233   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.565694   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566042   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.566066   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566301   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:47:58.566469   59208 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:58.566489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:58.566684   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.569109   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569485   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.569512   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569594   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.569763   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.569912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.570022   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.570167   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.570398   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.570412   59208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:58.675164   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:58.675217   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675455   59208 buildroot.go:166] provisioning hostname "default-k8s-diff-port-601445"
	I0719 15:47:58.675487   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.678103   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678522   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.678564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678721   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.678908   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679074   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679198   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.679345   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.679516   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.679531   59208 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-601445 && echo "default-k8s-diff-port-601445" | sudo tee /etc/hostname
	I0719 15:47:58.802305   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-601445
	
	I0719 15:47:58.802336   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.805215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805582   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.805613   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805796   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.805981   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806139   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806322   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.806517   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.806689   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.806706   59208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-601445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-601445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-601445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:58.919959   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:58.919985   59208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:58.920019   59208 buildroot.go:174] setting up certificates
	I0719 15:47:58.920031   59208 provision.go:84] configureAuth start
	I0719 15:47:58.920041   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.920283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.922837   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923193   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.923225   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923413   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.925832   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926128   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.926156   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926297   59208 provision.go:143] copyHostCerts
	I0719 15:47:58.926360   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:58.926374   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:58.926425   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:58.926512   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:58.926520   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:58.926543   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:58.926600   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:58.926609   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:58.926630   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:58.926682   59208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-601445 san=[127.0.0.1 192.168.61.144 default-k8s-diff-port-601445 localhost minikube]
	I0719 15:47:59.080911   59208 provision.go:177] copyRemoteCerts
	I0719 15:47:59.080966   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:59.080990   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084029   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.084059   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084219   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.084411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.084531   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.084674   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.172754   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:59.198872   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 15:47:59.222898   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 15:47:59.246017   59208 provision.go:87] duration metric: took 325.975105ms to configureAuth
	I0719 15:47:59.246037   59208 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:59.246215   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:47:59.246312   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.248757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249079   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.249111   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249354   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.249526   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249679   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249779   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.249924   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.250142   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.250161   59208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:59.743101   58376 start.go:364] duration metric: took 52.710718223s to acquireMachinesLock for "embed-certs-817144"
	I0719 15:47:59.743169   58376 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:59.743177   58376 fix.go:54] fixHost starting: 
	I0719 15:47:59.743553   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:59.743591   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:59.760837   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0719 15:47:59.761216   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:59.761734   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:47:59.761754   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:59.762080   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:59.762291   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:47:59.762504   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:47:59.764044   58376 fix.go:112] recreateIfNeeded on embed-certs-817144: state=Stopped err=<nil>
	I0719 15:47:59.764067   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	W0719 15:47:59.764217   58376 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:59.766063   58376 out.go:177] * Restarting existing kvm2 VM for "embed-certs-817144" ...
	I0719 15:47:56.582753   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:58.583049   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:59.508289   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:59.508327   59208 machine.go:97] duration metric: took 941.842272ms to provisionDockerMachine
	I0719 15:47:59.508343   59208 start.go:293] postStartSetup for "default-k8s-diff-port-601445" (driver="kvm2")
	I0719 15:47:59.508359   59208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:59.508383   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.508687   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:59.508720   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.511449   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.511887   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.511911   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.512095   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.512275   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.512437   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.512580   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.596683   59208 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:59.600761   59208 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:59.600782   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:59.600841   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:59.600911   59208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:59.600996   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:59.609867   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:59.633767   59208 start.go:296] duration metric: took 125.408568ms for postStartSetup
	I0719 15:47:59.633803   59208 fix.go:56] duration metric: took 20.874627736s for fixHost
	I0719 15:47:59.633825   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.636600   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.636944   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.636977   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.637121   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.637328   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637495   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637640   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.637811   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.637989   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.637999   59208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:59.742929   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404079.728807147
	
	I0719 15:47:59.742957   59208 fix.go:216] guest clock: 1721404079.728807147
	I0719 15:47:59.742967   59208 fix.go:229] Guest: 2024-07-19 15:47:59.728807147 +0000 UTC Remote: 2024-07-19 15:47:59.633807395 +0000 UTC m=+200.280673126 (delta=94.999752ms)
	I0719 15:47:59.743008   59208 fix.go:200] guest clock delta is within tolerance: 94.999752ms
	I0719 15:47:59.743013   59208 start.go:83] releasing machines lock for "default-k8s-diff-port-601445", held for 20.983876369s
	I0719 15:47:59.743040   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.743262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:59.746145   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746501   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.746534   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746662   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747297   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747461   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747553   59208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:59.747603   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.747714   59208 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:59.747738   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.750268   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750583   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750751   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750916   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750932   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.750942   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.751127   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751170   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.751269   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751353   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751421   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.751489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751646   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.834888   59208 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:59.859285   59208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:00.009771   59208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:00.015906   59208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:00.015973   59208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:00.032129   59208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:00.032150   59208 start.go:495] detecting cgroup driver to use...
	I0719 15:48:00.032215   59208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:00.050052   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:00.063282   59208 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:00.063341   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:00.078073   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:00.092872   59208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:00.217105   59208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:00.364335   59208 docker.go:233] disabling docker service ...
	I0719 15:48:00.364403   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:00.384138   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:00.400280   59208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:00.543779   59208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:00.671512   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:00.687337   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:00.708629   59208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:00.708690   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.720508   59208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:00.720580   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.732952   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.743984   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.756129   59208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:00.766873   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.777481   59208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.799865   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.812450   59208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:00.822900   59208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:00.822964   59208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:00.836117   59208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:00.845958   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:00.959002   59208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:01.104519   59208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:01.104598   59208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:01.110652   59208 start.go:563] Will wait 60s for crictl version
	I0719 15:48:01.110711   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:48:01.114358   59208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:01.156969   59208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:01.157063   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.187963   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.219925   59208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:47:58.299763   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.800069   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.299998   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.800005   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.300717   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.800601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.300433   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.800788   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.300324   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.221101   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:48:01.224369   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:01.224789   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224989   59208 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:01.229813   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:01.243714   59208 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:01.243843   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:01.243886   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:01.283013   59208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:01.283093   59208 ssh_runner.go:195] Run: which lz4
	I0719 15:48:01.287587   59208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:48:01.291937   59208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:01.291965   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:02.810751   59208 crio.go:462] duration metric: took 1.52319928s to copy over tarball
	I0719 15:48:02.810846   59208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:59.767270   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Start
	I0719 15:47:59.767433   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring networks are active...
	I0719 15:47:59.768056   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network default is active
	I0719 15:47:59.768371   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network mk-embed-certs-817144 is active
	I0719 15:47:59.768804   58376 main.go:141] libmachine: (embed-certs-817144) Getting domain xml...
	I0719 15:47:59.769396   58376 main.go:141] libmachine: (embed-certs-817144) Creating domain...
	I0719 15:48:01.024457   58376 main.go:141] libmachine: (embed-certs-817144) Waiting to get IP...
	I0719 15:48:01.025252   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.025697   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.025741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.025660   60153 retry.go:31] will retry after 211.260956ms: waiting for machine to come up
	I0719 15:48:01.238027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.238561   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.238588   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.238529   60153 retry.go:31] will retry after 346.855203ms: waiting for machine to come up
	I0719 15:48:01.587201   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.587773   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.587815   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.587736   60153 retry.go:31] will retry after 327.69901ms: waiting for machine to come up
	I0719 15:48:01.917433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.917899   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.917931   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.917864   60153 retry.go:31] will retry after 474.430535ms: waiting for machine to come up
	I0719 15:48:02.393610   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.394139   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.394168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.394061   60153 retry.go:31] will retry after 491.247455ms: waiting for machine to come up
	I0719 15:48:02.886826   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.887296   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.887329   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.887249   60153 retry.go:31] will retry after 661.619586ms: waiting for machine to come up
	I0719 15:48:03.550633   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:03.551175   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:03.551199   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:03.551126   60153 retry.go:31] will retry after 1.10096194s: waiting for machine to come up
	I0719 15:48:00.583866   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:02.585144   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:03.300240   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.799829   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.299793   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.800609   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.300595   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.799844   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.800150   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.299923   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.800063   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.112520   59208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301644218s)
	I0719 15:48:05.112555   59208 crio.go:469] duration metric: took 2.301774418s to extract the tarball
	I0719 15:48:05.112565   59208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:05.151199   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:05.193673   59208 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:05.193701   59208 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:05.193712   59208 kubeadm.go:934] updating node { 192.168.61.144 8444 v1.30.3 crio true true} ...
	I0719 15:48:05.193836   59208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-601445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:05.193919   59208 ssh_runner.go:195] Run: crio config
	I0719 15:48:05.239103   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:05.239131   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:05.239146   59208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:05.239176   59208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-601445 NodeName:default-k8s-diff-port-601445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:05.239374   59208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-601445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:05.239441   59208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:05.249729   59208 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:05.249799   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:05.259540   59208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 15:48:05.277388   59208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:05.294497   59208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 15:48:05.313990   59208 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:05.318959   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:05.332278   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:05.463771   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:05.480474   59208 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445 for IP: 192.168.61.144
	I0719 15:48:05.480499   59208 certs.go:194] generating shared ca certs ...
	I0719 15:48:05.480520   59208 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:05.480674   59208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:05.480732   59208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:05.480746   59208 certs.go:256] generating profile certs ...
	I0719 15:48:05.480859   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.key
	I0719 15:48:05.480937   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key.e31ea710
	I0719 15:48:05.480992   59208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key
	I0719 15:48:05.481128   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:05.481165   59208 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:05.481180   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:05.481210   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:05.481245   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:05.481276   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:05.481334   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:05.481940   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:05.524604   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:05.562766   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:05.618041   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:05.660224   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 15:48:05.689232   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:05.713890   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:05.738923   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:05.764447   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:05.793905   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:05.823630   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:05.849454   59208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:05.868309   59208 ssh_runner.go:195] Run: openssl version
	I0719 15:48:05.874423   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:05.887310   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.891994   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.892057   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.898173   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:05.911541   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:05.922829   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927537   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927600   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.933642   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:05.946269   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:05.958798   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963899   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963959   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.969801   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:48:05.980966   59208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:05.985487   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:05.991303   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:05.997143   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:06.003222   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:06.008984   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:06.014939   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
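	The run of `openssl x509 -noout -checkend 86400` commands above verifies that each existing control-plane certificate will still be valid 24 hours (86,400 seconds) from now before the certs are reused for the restart. A minimal Go sketch of the same check follows; it is an illustration only, not minikube's implementation, and the certificate path is simply taken from the log.

	// certcheck.go - report whether a PEM certificate expires within a given
	// window, which is what `openssl x509 -noout -checkend 86400` tests.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin returns true when the certificate at path will no longer
	// be valid d from now.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("certificate will expire within 24h")
		}
	}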
	I0719 15:48:06.020976   59208 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:06.021059   59208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:06.021106   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.066439   59208 cri.go:89] found id: ""
	I0719 15:48:06.066503   59208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:06.080640   59208 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:06.080663   59208 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:06.080730   59208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:06.093477   59208 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:06.094740   59208 kubeconfig.go:125] found "default-k8s-diff-port-601445" server: "https://192.168.61.144:8444"
	I0719 15:48:06.096907   59208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:06.107974   59208 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.144
	I0719 15:48:06.108021   59208 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:06.108035   59208 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:06.108109   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.156149   59208 cri.go:89] found id: ""
	I0719 15:48:06.156222   59208 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:06.172431   59208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:06.182482   59208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:06.182511   59208 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:06.182562   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 15:48:06.192288   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:06.192361   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:06.202613   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 15:48:06.212553   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:06.212624   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:06.223086   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.233949   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:06.234007   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.247224   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 15:48:06.257851   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:06.257908   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:06.268650   59208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:06.279549   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:06.421964   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.407768   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.614213   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.686560   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.769476   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:07.769590   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.270472   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.770366   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.795057   59208 api_server.go:72] duration metric: took 1.025580277s to wait for apiserver process to appear ...
	I0719 15:48:08.795086   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:08.795112   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:08.795617   59208 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0719 15:48:09.295459   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:04.653309   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:04.653784   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:04.653846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:04.653753   60153 retry.go:31] will retry after 1.276153596s: waiting for machine to come up
	I0719 15:48:05.931365   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:05.931820   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:05.931848   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:05.931798   60153 retry.go:31] will retry after 1.372328403s: waiting for machine to come up
	I0719 15:48:07.305390   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:07.305892   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:07.305922   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:07.305850   60153 retry.go:31] will retry after 1.738311105s: waiting for machine to come up
	I0719 15:48:09.046095   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:09.046526   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:09.046558   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:09.046481   60153 retry.go:31] will retry after 2.169449629s: waiting for machine to come up
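	The "will retry after ..." intervals in the libmachine lines above grow from well under a second to several seconds while the VM has no IP address yet. A minimal sketch of a growing, jittered retry loop in that spirit is shown below; the exact backoff policy used by retry.go is not visible in the log, so the growth factor and jitter here are assumptions.

	// retry_backoff.go - retry an operation with a growing, jittered wait
	// between attempts (illustrative policy, not minikube's retry.go).
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, initial time.Duration, fn func() error) error {
		wait := initial
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			// add up to 50% jitter so concurrent waiters do not retry in lockstep
			sleep := wait + time.Duration(rand.Int63n(int64(wait)/2))
			fmt.Printf("will retry after %s\n", sleep)
			time.Sleep(sleep)
			wait = wait * 3 / 2 // grow the base wait ~1.5x per attempt
		}
		return fmt.Errorf("gave up after %d attempts", attempts)
	}

	func main() {
		_ = retry(5, 500*time.Millisecond, func() error {
			return fmt.Errorf("domain has no IP address yet")
		})
	}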
	I0719 15:48:05.084157   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:07.583246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:09.584584   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:11.457584   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.457651   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.457670   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.490130   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.490165   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.795439   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.803724   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:11.803757   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.295287   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.300002   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:12.300034   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.795285   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.800067   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:48:12.808020   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:12.808045   59208 api_server.go:131] duration metric: took 4.012952016s to wait for apiserver health ...
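	The healthz sequence above is typical of an apiserver coming back after a restart: first 403 for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal Go sketch of polling such an endpoint until it reports 200 follows; it is illustrative only, not minikube's api_server.go, and the URL, polling cadence, and skipped TLS verification are assumptions.

	// healthz_wait.go - poll an apiserver /healthz endpoint until it returns
	// 200 OK. InsecureSkipVerify is used only because the bootstrap apiserver
	// serves a self-signed certificate; do not copy that into real tooling.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // control plane answered "ok"
				}
				// 403 (anonymous user) and 500 (post-start hooks still running)
				// are expected during startup; keep polling.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.144:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}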
	I0719 15:48:12.808055   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:12.808064   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:12.810134   59208 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:08.300278   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.799805   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.299882   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.800690   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.300543   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.300260   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.800160   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.812011   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:12.824520   59208 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:12.846711   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:12.855286   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:12.855315   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:12.855322   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:12.855329   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:12.855335   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:12.855345   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:12.855353   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:12.855360   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:12.855369   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:12.855377   59208 system_pods.go:74] duration metric: took 8.645314ms to wait for pod list to return data ...
	I0719 15:48:12.855390   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:12.858531   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:12.858556   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:12.858566   59208 node_conditions.go:105] duration metric: took 3.171526ms to run NodePressure ...
	I0719 15:48:12.858581   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:13.176014   59208 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180575   59208 kubeadm.go:739] kubelet initialised
	I0719 15:48:13.180602   59208 kubeadm.go:740] duration metric: took 4.561708ms waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180612   59208 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:13.187723   59208 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.204023   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204052   59208 pod_ready.go:81] duration metric: took 16.303152ms for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.204061   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204070   59208 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.212768   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212790   59208 pod_ready.go:81] duration metric: took 8.709912ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.212800   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212812   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.220452   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220474   59208 pod_ready.go:81] duration metric: took 7.650656ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.220482   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220489   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.251973   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.251997   59208 pod_ready.go:81] duration metric: took 31.499608ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.252008   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.252029   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.650914   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650940   59208 pod_ready.go:81] duration metric: took 398.904724ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.650948   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650954   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.050582   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050615   59208 pod_ready.go:81] duration metric: took 399.652069ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.050630   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050642   59208 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.450349   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450379   59208 pod_ready.go:81] duration metric: took 399.72875ms for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.450391   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450399   59208 pod_ready.go:38] duration metric: took 1.269776818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:14.450416   59208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:14.462296   59208 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:14.462318   59208 kubeadm.go:597] duration metric: took 8.38163922s to restartPrimaryControlPlane
	I0719 15:48:14.462329   59208 kubeadm.go:394] duration metric: took 8.441360513s to StartCluster
	I0719 15:48:14.462348   59208 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.462422   59208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:14.464082   59208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.464400   59208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:14.464459   59208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:14.464531   59208 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464570   59208 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.464581   59208 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:14.464592   59208 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464610   59208 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464636   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:14.464670   59208 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:14.464672   59208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-601445"
	W0719 15:48:14.464684   59208 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:14.464613   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.464740   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.465050   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465111   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465151   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465178   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465235   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.466230   59208 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:11.217150   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:11.217605   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:11.217634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:11.217561   60153 retry.go:31] will retry after 3.406637692s: waiting for machine to come up
	I0719 15:48:14.467899   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:14.481294   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0719 15:48:14.481538   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0719 15:48:14.481541   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0719 15:48:14.481658   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.482122   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482145   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482363   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482387   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482461   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482478   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482590   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482704   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482762   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482853   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.483131   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483159   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.483199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483217   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.486437   59208 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.486462   59208 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:14.486492   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.486893   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.486932   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.498388   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0719 15:48:14.498897   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0719 15:48:14.498952   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499251   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499660   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499678   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.499838   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499853   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.500068   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500168   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500232   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.500410   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.501505   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0719 15:48:14.501876   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.502391   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.502413   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.502456   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.502745   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.503006   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.503314   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.503341   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.505162   59208 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:14.505166   59208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:12.084791   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.582986   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.506465   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:14.506487   59208 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:14.506506   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.506585   59208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.506604   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:14.506628   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.510227   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511092   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511134   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511207   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511231   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511257   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511370   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511390   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511570   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511574   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511662   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.511713   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511787   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511840   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.520612   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0719 15:48:14.521013   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.521451   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.521470   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.521817   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.522016   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.523622   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.523862   59208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.523876   59208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:14.523895   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.526426   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.526882   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.526941   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.527060   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.527190   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.527344   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.527439   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.674585   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:14.693700   59208 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:14.752990   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.856330   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:14.856350   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:14.884762   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:14.884784   59208 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:14.895548   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.915815   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:14.915844   59208 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:14.979442   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:15.098490   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098517   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098869   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.098893   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.098902   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.099141   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.099158   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.105078   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.105252   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.105506   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.105526   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.802868   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.802892   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803265   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803279   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.803285   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.803517   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803530   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803577   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.905945   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.905972   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906244   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906266   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906266   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.906275   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.906283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906484   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906496   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906511   59208 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:15.908671   59208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 15:48:13.299986   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.800036   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.300736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.799875   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.300297   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.800535   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.299951   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.800667   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.300251   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.800590   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.910057   59208 addons.go:510] duration metric: took 1.445597408s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 15:48:16.697266   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:18.698379   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.627319   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:14.627800   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:14.627822   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:14.627767   60153 retry.go:31] will retry after 4.38444645s: waiting for machine to come up
	I0719 15:48:19.016073   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016711   58376 main.go:141] libmachine: (embed-certs-817144) Found IP for machine: 192.168.72.37
	I0719 15:48:19.016742   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has current primary IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016749   58376 main.go:141] libmachine: (embed-certs-817144) Reserving static IP address...
	I0719 15:48:19.017180   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.017204   58376 main.go:141] libmachine: (embed-certs-817144) Reserved static IP address: 192.168.72.37
	I0719 15:48:19.017222   58376 main.go:141] libmachine: (embed-certs-817144) DBG | skip adding static IP to network mk-embed-certs-817144 - found existing host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"}
	I0719 15:48:19.017239   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Getting to WaitForSSH function...
	I0719 15:48:19.017254   58376 main.go:141] libmachine: (embed-certs-817144) Waiting for SSH to be available...
	I0719 15:48:19.019511   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.019867   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.019896   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.020064   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH client type: external
	I0719 15:48:19.020080   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa (-rw-------)
	I0719 15:48:19.020107   58376 main.go:141] libmachine: (embed-certs-817144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:48:19.020115   58376 main.go:141] libmachine: (embed-certs-817144) DBG | About to run SSH command:
	I0719 15:48:19.020124   58376 main.go:141] libmachine: (embed-certs-817144) DBG | exit 0
	I0719 15:48:19.150328   58376 main.go:141] libmachine: (embed-certs-817144) DBG | SSH cmd err, output: <nil>: 
	I0719 15:48:19.150676   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetConfigRaw
	I0719 15:48:19.151317   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.154087   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154600   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.154634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154907   58376 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/config.json ...
	I0719 15:48:19.155143   58376 machine.go:94] provisionDockerMachine start ...
	I0719 15:48:19.155168   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:19.155369   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.157741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.158060   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158175   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.158368   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158618   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158769   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.158945   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.159144   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.159161   58376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:48:19.274836   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:48:19.274863   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275148   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:48:19.275174   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275373   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.278103   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278489   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.278518   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.278892   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279111   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279299   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.279577   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.279798   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.279815   58376 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-817144 && echo "embed-certs-817144" | sudo tee /etc/hostname
	I0719 15:48:19.413956   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-817144
	
	I0719 15:48:19.413988   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.416836   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.417196   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417408   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.417599   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417777   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417911   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.418083   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.418274   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.418290   58376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-817144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-817144/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-817144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:48:16.583538   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.083431   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.541400   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:48:19.541439   58376 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:48:19.541464   58376 buildroot.go:174] setting up certificates
	I0719 15:48:19.541478   58376 provision.go:84] configureAuth start
	I0719 15:48:19.541495   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.541801   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.544209   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544579   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.544608   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544766   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.547206   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.547570   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547714   58376 provision.go:143] copyHostCerts
	I0719 15:48:19.547772   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:48:19.547782   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:48:19.547827   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:48:19.547939   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:48:19.547949   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:48:19.547969   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:48:19.548024   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:48:19.548031   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:48:19.548047   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:48:19.548093   58376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.embed-certs-817144 san=[127.0.0.1 192.168.72.37 embed-certs-817144 localhost minikube]
	I0719 15:48:20.024082   58376 provision.go:177] copyRemoteCerts
	I0719 15:48:20.024137   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:48:20.024157   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.026940   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027322   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.027358   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027541   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.027819   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.028011   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.028165   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.117563   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:48:20.144428   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:48:20.171520   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:48:20.195188   58376 provision.go:87] duration metric: took 653.6924ms to configureAuth
	I0719 15:48:20.195215   58376 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:48:20.195432   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:20.195518   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.198648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.198970   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.199007   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.199126   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.199335   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199527   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199687   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.199849   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.200046   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.200063   58376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:48:20.502753   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:48:20.502782   58376 machine.go:97] duration metric: took 1.347623735s to provisionDockerMachine
	I0719 15:48:20.502794   58376 start.go:293] postStartSetup for "embed-certs-817144" (driver="kvm2")
	I0719 15:48:20.502805   58376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:48:20.502821   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.503204   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:48:20.503248   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.506142   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.506563   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506697   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.506938   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.507125   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.507258   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.593356   58376 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:48:20.597843   58376 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:48:20.597877   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:48:20.597948   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:48:20.598048   58376 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:48:20.598164   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:48:20.607951   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:20.634860   58376 start.go:296] duration metric: took 132.043928ms for postStartSetup
	I0719 15:48:20.634900   58376 fix.go:56] duration metric: took 20.891722874s for fixHost
	I0719 15:48:20.634919   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.637846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638181   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.638218   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638439   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.638674   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.638884   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.639054   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.639256   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.639432   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.639444   58376 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:48:20.755076   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404100.730818472
	
	I0719 15:48:20.755107   58376 fix.go:216] guest clock: 1721404100.730818472
	I0719 15:48:20.755115   58376 fix.go:229] Guest: 2024-07-19 15:48:20.730818472 +0000 UTC Remote: 2024-07-19 15:48:20.634903926 +0000 UTC m=+356.193225446 (delta=95.914546ms)
	I0719 15:48:20.755134   58376 fix.go:200] guest clock delta is within tolerance: 95.914546ms
	I0719 15:48:20.755139   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 21.011996674s
	I0719 15:48:20.755171   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.755465   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:20.758255   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758621   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.758644   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758861   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759348   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759545   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759656   58376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:48:20.759720   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.759780   58376 ssh_runner.go:195] Run: cat /version.json
	I0719 15:48:20.759802   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.762704   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.762833   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763161   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763202   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763399   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763493   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763545   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763608   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763693   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763772   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764001   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763996   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.764156   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764278   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.867430   58376 ssh_runner.go:195] Run: systemctl --version
	I0719 15:48:20.873463   58376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:21.029369   58376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:21.035953   58376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:21.036028   58376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:21.054352   58376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:21.054381   58376 start.go:495] detecting cgroup driver to use...
	I0719 15:48:21.054440   58376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:21.071903   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:21.088624   58376 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:21.088688   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:21.104322   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:21.120089   58376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:21.242310   58376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:21.422514   58376 docker.go:233] disabling docker service ...
	I0719 15:48:21.422589   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:21.439213   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:21.454361   58376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:21.577118   58376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:21.704150   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:21.719160   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:21.738765   58376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:21.738817   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.750720   58376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:21.750798   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.763190   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.775630   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.787727   58376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:21.799520   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.812016   58376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.830564   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.841770   58376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:21.851579   58376 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:21.851651   58376 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:21.864529   58376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:21.874301   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:21.994669   58376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:22.131448   58376 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:22.131521   58376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:22.137328   58376 start.go:563] Will wait 60s for crictl version
	I0719 15:48:22.137391   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:48:22.141409   58376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:22.182947   58376 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:22.183029   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.217804   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.252450   58376 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:48:18.300557   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.800420   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.300696   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.799874   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.300803   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.800634   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.300760   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.799929   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.300267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.800463   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.197350   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:22.197536   59208 node_ready.go:49] node "default-k8s-diff-port-601445" has status "Ready":"True"
	I0719 15:48:22.197558   59208 node_ready.go:38] duration metric: took 7.503825721s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:22.197568   59208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:22.203380   59208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:24.211899   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:22.253862   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:22.256397   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256763   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:22.256791   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256968   58376 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:22.261184   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:22.274804   58376 kubeadm.go:883] updating cluster {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:22.274936   58376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:22.274994   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:22.317501   58376 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:22.317559   58376 ssh_runner.go:195] Run: which lz4
	I0719 15:48:22.321646   58376 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:48:22.326455   58376 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:22.326478   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:23.820083   58376 crio.go:462] duration metric: took 1.498469232s to copy over tarball
	I0719 15:48:23.820155   58376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:48:21.583230   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.585191   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.300116   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.800737   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.300641   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.800158   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.300678   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.800635   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.299778   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.799791   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.299845   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.710838   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.786269   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:26.105248   58376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.285062307s)
	I0719 15:48:26.105271   58376 crio.go:469] duration metric: took 2.285164513s to extract the tarball
	I0719 15:48:26.105279   58376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:26.142811   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:26.185631   58376 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:26.185660   58376 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:26.185668   58376 kubeadm.go:934] updating node { 192.168.72.37 8443 v1.30.3 crio true true} ...
	I0719 15:48:26.185784   58376 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-817144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:26.185857   58376 ssh_runner.go:195] Run: crio config
	I0719 15:48:26.238150   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:26.238172   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:26.238183   58376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:26.238211   58376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.37 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-817144 NodeName:embed-certs-817144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:26.238449   58376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-817144"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:26.238515   58376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:26.249200   58376 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:26.249278   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:26.258710   58376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 15:48:26.279235   58376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:26.299469   58376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0719 15:48:26.317789   58376 ssh_runner.go:195] Run: grep 192.168.72.37	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:26.321564   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:26.333153   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:26.452270   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:26.469344   58376 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144 for IP: 192.168.72.37
	I0719 15:48:26.469366   58376 certs.go:194] generating shared ca certs ...
	I0719 15:48:26.469382   58376 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:26.469530   58376 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:26.469586   58376 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:26.469601   58376 certs.go:256] generating profile certs ...
	I0719 15:48:26.469694   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/client.key
	I0719 15:48:26.469791   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key.928d4c24
	I0719 15:48:26.469846   58376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key
	I0719 15:48:26.469982   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:26.470021   58376 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:26.470035   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:26.470071   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:26.470105   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:26.470140   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:26.470197   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:26.470812   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:26.508455   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:26.537333   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:26.565167   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:26.601152   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 15:48:26.636408   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:26.669076   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:26.695438   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:26.718897   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:26.741760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:26.764760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:26.787772   58376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:26.807332   58376 ssh_runner.go:195] Run: openssl version
	I0719 15:48:26.815182   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:26.827373   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831926   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831973   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.837923   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:48:26.849158   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:26.860466   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865178   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865249   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.870873   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:26.882044   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:26.893283   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897750   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897809   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.903395   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:26.914389   58376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:26.918904   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:26.924659   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:26.930521   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:26.936808   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:26.942548   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:26.948139   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:48:26.954557   58376 kubeadm.go:392] StartCluster: {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:26.954644   58376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:26.954722   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:26.994129   58376 cri.go:89] found id: ""
	I0719 15:48:26.994205   58376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:27.006601   58376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:27.006624   58376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:27.006699   58376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:27.017166   58376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:27.018580   58376 kubeconfig.go:125] found "embed-certs-817144" server: "https://192.168.72.37:8443"
	I0719 15:48:27.021622   58376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:27.033000   58376 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.37
	I0719 15:48:27.033033   58376 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:27.033044   58376 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:27.033083   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:27.073611   58376 cri.go:89] found id: ""
	I0719 15:48:27.073678   58376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:27.092986   58376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:27.103557   58376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:27.103580   58376 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:27.103636   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:48:27.113687   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:27.113752   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:27.123696   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:48:27.132928   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:27.132984   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:27.142566   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.152286   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:27.152335   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.161701   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:48:27.171532   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:27.171591   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:27.181229   58376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:27.192232   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:27.330656   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.287561   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.513476   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.616308   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.704518   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:28.704605   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.205265   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.082992   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.746255   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.300034   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.800118   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.300099   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.800538   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.300194   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.800056   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.300473   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.300181   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.800267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.704706   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.204728   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.221741   58376 api_server.go:72] duration metric: took 1.517220815s to wait for apiserver process to appear ...
	I0719 15:48:30.221766   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:30.221786   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.665104   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.665138   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.665152   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.703238   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.703271   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.722495   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.748303   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:32.748344   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.222861   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.227076   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.227104   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.722705   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.734658   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.734683   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:34.222279   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:34.227870   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:48:34.233621   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:34.233646   58376 api_server.go:131] duration metric: took 4.011873202s to wait for apiserver health ...
	I0719 15:48:34.233656   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:34.233664   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:34.235220   58376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:30.210533   59208 pod_ready.go:92] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.210557   59208 pod_ready.go:81] duration metric: took 8.007151724s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.210568   59208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215669   59208 pod_ready.go:92] pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.215692   59208 pod_ready.go:81] duration metric: took 5.116005ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215702   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222633   59208 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.222655   59208 pod_ready.go:81] duration metric: took 6.947228ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222664   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227631   59208 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.227656   59208 pod_ready.go:81] duration metric: took 4.985227ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227667   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405047   59208 pod_ready.go:92] pod "kube-proxy-r7b2z" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.405073   59208 pod_ready.go:81] duration metric: took 177.397954ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405085   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805843   59208 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.805877   59208 pod_ready.go:81] duration metric: took 400.783803ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805890   59208 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:32.821231   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.236303   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:34.248133   58376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:34.270683   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:34.279907   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:34.279939   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:34.279946   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:34.279953   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:34.279960   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:34.279966   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:34.279972   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:34.279977   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:34.279982   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:34.279988   58376 system_pods.go:74] duration metric: took 9.282886ms to wait for pod list to return data ...
	I0719 15:48:34.279995   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:34.283597   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:34.283623   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:34.283634   58376 node_conditions.go:105] duration metric: took 3.634999ms to run NodePressure ...
	I0719 15:48:34.283649   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:31.082803   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:33.583510   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.586116   58376 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590095   58376 kubeadm.go:739] kubelet initialised
	I0719 15:48:34.590119   58376 kubeadm.go:740] duration metric: took 3.977479ms waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590128   58376 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:34.594987   58376 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.600192   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600212   58376 pod_ready.go:81] duration metric: took 5.205124ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.600220   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600225   58376 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.603934   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603952   58376 pod_ready.go:81] duration metric: took 3.719853ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.603959   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603965   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.607778   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607803   58376 pod_ready.go:81] duration metric: took 3.830174ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.607817   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607826   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.673753   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673775   58376 pod_ready.go:81] duration metric: took 65.937586ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.673783   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673788   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.075506   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075539   58376 pod_ready.go:81] duration metric: took 401.743578ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.075548   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075554   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.474518   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474546   58376 pod_ready.go:81] duration metric: took 398.985628ms for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.474558   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474567   58376 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.874540   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874567   58376 pod_ready.go:81] duration metric: took 399.989978ms for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.874576   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874582   58376 pod_ready.go:38] duration metric: took 1.284443879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:35.874646   58376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:35.886727   58376 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:35.886751   58376 kubeadm.go:597] duration metric: took 8.880120513s to restartPrimaryControlPlane
	I0719 15:48:35.886760   58376 kubeadm.go:394] duration metric: took 8.932210528s to StartCluster
	I0719 15:48:35.886781   58376 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.886859   58376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:35.888389   58376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.888642   58376 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:35.888722   58376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:35.888781   58376 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-817144"
	I0719 15:48:35.888810   58376 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-817144"
	I0719 15:48:35.888824   58376 addons.go:69] Setting default-storageclass=true in profile "embed-certs-817144"
	I0719 15:48:35.888839   58376 addons.go:69] Setting metrics-server=true in profile "embed-certs-817144"
	I0719 15:48:35.888875   58376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-817144"
	I0719 15:48:35.888888   58376 addons.go:234] Setting addon metrics-server=true in "embed-certs-817144"
	W0719 15:48:35.888897   58376 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:35.888931   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.888840   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0719 15:48:35.888843   58376 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:35.889000   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.889231   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889242   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889247   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889270   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889272   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889282   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.890641   58376 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:35.892144   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:35.905134   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0719 15:48:35.905572   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.905788   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0719 15:48:35.906107   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906132   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.906171   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.906496   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.906825   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906846   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.907126   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.907179   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.907215   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.907289   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.908269   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0719 15:48:35.908747   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.909343   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.909367   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.909787   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.910337   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910382   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.910615   58376 addons.go:234] Setting addon default-storageclass=true in "embed-certs-817144"
	W0719 15:48:35.910632   58376 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:35.910662   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.910937   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910965   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.926165   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 15:48:35.926905   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.926944   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0719 15:48:35.927369   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.927573   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927636   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927829   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927847   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927959   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928512   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.928551   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.928759   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928824   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0719 15:48:35.928964   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.929176   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.929546   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.929557   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.929927   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.930278   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.931161   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.931773   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.933234   58376 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:35.933298   58376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:35.934543   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:35.934556   58376 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:35.934569   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.934629   58376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:35.934642   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:35.934657   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.938300   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938628   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.938648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938679   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939150   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939340   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.939433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.939479   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939536   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.939619   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939673   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.939937   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.940081   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.940190   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.947955   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0719 15:48:35.948206   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.948643   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.948654   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.948961   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.949119   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.950572   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.951770   58376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:35.951779   58376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:35.951791   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.957009   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957381   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.957405   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957550   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.957717   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.957841   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.957953   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:36.072337   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:36.091547   58376 node_ready.go:35] waiting up to 6m0s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:36.182328   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:36.195704   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:36.195729   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:36.221099   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:36.224606   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:36.224632   58376 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:36.247264   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:36.247289   58376 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:36.300365   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:37.231670   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010526005s)
	I0719 15:48:37.231729   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231743   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.231765   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049406285s)
	I0719 15:48:37.231807   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231822   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232034   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232085   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232096   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.232100   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232105   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.232115   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232345   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232366   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233486   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233529   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233541   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.233549   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.233792   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233815   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233832   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.240487   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.240502   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.240732   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.240754   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.240755   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288064   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288085   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288370   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288389   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288378   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288400   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288406   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288595   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288606   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288652   58376 addons.go:475] Verifying addon metrics-server=true in "embed-certs-817144"
	I0719 15:48:37.290497   58376 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:48:33.300279   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.800631   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.300013   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.800051   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.300468   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.800383   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.300186   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.800623   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.300068   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.799841   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.314792   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.814653   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.291961   58376 addons.go:510] duration metric: took 1.403238435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:48:38.096793   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.584345   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.585215   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:38.300002   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.800639   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.300564   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.800314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.300642   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.799787   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.299849   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.300242   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.800481   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.818959   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.313745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:44.314213   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:40.596246   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.095976   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.595640   58376 node_ready.go:49] node "embed-certs-817144" has status "Ready":"True"
	I0719 15:48:43.595659   58376 node_ready.go:38] duration metric: took 7.504089345s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:43.595667   58376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:43.600832   58376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605878   58376 pod_ready.go:92] pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.605900   58376 pod_ready.go:81] duration metric: took 5.046391ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605912   58376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610759   58376 pod_ready.go:92] pod "etcd-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.610778   58376 pod_ready.go:81] duration metric: took 4.85915ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610788   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615239   58376 pod_ready.go:92] pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.615257   58376 pod_ready.go:81] duration metric: took 4.46126ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615267   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619789   58376 pod_ready.go:92] pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.619804   58376 pod_ready.go:81] duration metric: took 4.530085ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619814   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998585   58376 pod_ready.go:92] pod "kube-proxy-4d4g9" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.998612   58376 pod_ready.go:81] duration metric: took 378.78761ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998622   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:40.084033   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.582983   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:43.300412   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.300117   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.799821   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.300031   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.800676   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.300710   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.800307   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.800008   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.812904   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.313178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:46.004415   58376 pod_ready.go:102] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.006304   58376 pod_ready.go:92] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:48.006329   58376 pod_ready.go:81] duration metric: took 4.00769937s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:48.006339   58376 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:45.082973   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:47.582224   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.582782   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.300512   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.799929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:48.799998   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:48.839823   58817 cri.go:89] found id: ""
	I0719 15:48:48.839845   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.839852   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:48.839863   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:48.839920   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:48.874635   58817 cri.go:89] found id: ""
	I0719 15:48:48.874661   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.874671   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:48.874679   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:48.874736   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:48.909391   58817 cri.go:89] found id: ""
	I0719 15:48:48.909417   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.909426   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:48.909431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:48.909491   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:48.951232   58817 cri.go:89] found id: ""
	I0719 15:48:48.951258   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.951265   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:48.951271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:48.951323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:48.984391   58817 cri.go:89] found id: ""
	I0719 15:48:48.984413   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.984420   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:48.984426   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:48.984481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:49.018949   58817 cri.go:89] found id: ""
	I0719 15:48:49.018987   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.018996   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:49.019003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:49.019060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:49.055182   58817 cri.go:89] found id: ""
	I0719 15:48:49.055208   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.055217   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:49.055222   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:49.055270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:49.090341   58817 cri.go:89] found id: ""
	I0719 15:48:49.090364   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.090371   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:49.090378   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:49.090390   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:49.104137   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:49.104166   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:49.239447   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:49.239473   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:49.239489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:49.307270   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:49.307307   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:49.345886   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:49.345925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:51.898153   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:51.911943   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:51.912006   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:51.946512   58817 cri.go:89] found id: ""
	I0719 15:48:51.946562   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.946573   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:51.946603   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:51.946664   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:51.982341   58817 cri.go:89] found id: ""
	I0719 15:48:51.982373   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.982381   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:51.982387   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:51.982441   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:52.019705   58817 cri.go:89] found id: ""
	I0719 15:48:52.019732   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.019739   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:52.019744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:52.019799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:52.057221   58817 cri.go:89] found id: ""
	I0719 15:48:52.057250   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.057262   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:52.057271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:52.057353   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:52.097277   58817 cri.go:89] found id: ""
	I0719 15:48:52.097306   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.097317   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:52.097325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:52.097389   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:52.136354   58817 cri.go:89] found id: ""
	I0719 15:48:52.136398   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.136406   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:52.136412   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:52.136463   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:52.172475   58817 cri.go:89] found id: ""
	I0719 15:48:52.172502   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.172510   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:52.172516   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:52.172565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:52.209164   58817 cri.go:89] found id: ""
	I0719 15:48:52.209192   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.209204   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:52.209214   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:52.209238   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:52.260069   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:52.260101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:52.274794   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:52.274825   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:52.356599   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:52.356628   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:52.356650   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:52.427582   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:52.427630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:51.814049   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:53.815503   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:50.015637   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:52.515491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:51.583726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.083179   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.977864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:54.993571   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:54.993645   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:55.034576   58817 cri.go:89] found id: ""
	I0719 15:48:55.034630   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.034641   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:55.034649   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:55.034712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:55.068305   58817 cri.go:89] found id: ""
	I0719 15:48:55.068332   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.068343   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:55.068350   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:55.068408   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:55.106192   58817 cri.go:89] found id: ""
	I0719 15:48:55.106220   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.106227   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:55.106248   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:55.106304   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:55.141287   58817 cri.go:89] found id: ""
	I0719 15:48:55.141318   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.141328   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:55.141334   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:55.141391   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:55.179965   58817 cri.go:89] found id: ""
	I0719 15:48:55.179989   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.179999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:55.180007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:55.180065   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.213558   58817 cri.go:89] found id: ""
	I0719 15:48:55.213588   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.213598   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:55.213607   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:55.213663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:55.247201   58817 cri.go:89] found id: ""
	I0719 15:48:55.247230   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.247243   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:55.247250   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:55.247309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:55.283157   58817 cri.go:89] found id: ""
	I0719 15:48:55.283191   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.283200   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:55.283211   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:55.283228   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:55.361089   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:55.361116   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:55.361134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:55.437784   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:55.437819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:55.480735   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:55.480770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:55.534013   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:55.534045   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.048567   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:58.063073   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:58.063146   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:58.100499   58817 cri.go:89] found id: ""
	I0719 15:48:58.100527   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.100538   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:58.100545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:58.100612   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:58.136885   58817 cri.go:89] found id: ""
	I0719 15:48:58.136913   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.136924   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:58.136932   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:58.137000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:58.172034   58817 cri.go:89] found id: ""
	I0719 15:48:58.172064   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.172074   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:58.172081   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:58.172135   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:58.209113   58817 cri.go:89] found id: ""
	I0719 15:48:58.209145   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.209157   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:58.209166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:58.209256   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:58.258903   58817 cri.go:89] found id: ""
	I0719 15:48:58.258938   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.258949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:58.258957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:58.259016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.816000   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.817771   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:55.014213   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.014730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:56.083381   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.088572   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.312314   58817 cri.go:89] found id: ""
	I0719 15:48:58.312342   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.312353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:58.312361   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:58.312421   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:58.349566   58817 cri.go:89] found id: ""
	I0719 15:48:58.349628   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.349638   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:58.349645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:58.349709   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:58.383834   58817 cri.go:89] found id: ""
	I0719 15:48:58.383863   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.383880   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:58.383893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:58.383907   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:58.436984   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:58.437020   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.450460   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:58.450489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:58.523392   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:58.523408   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:58.523420   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:58.601407   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:58.601439   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:01.141864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:01.155908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:01.155965   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:01.191492   58817 cri.go:89] found id: ""
	I0719 15:49:01.191524   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.191534   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:01.191542   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:01.191623   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:01.227615   58817 cri.go:89] found id: ""
	I0719 15:49:01.227646   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.227653   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:01.227659   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:01.227716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:01.262624   58817 cri.go:89] found id: ""
	I0719 15:49:01.262647   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.262655   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:01.262661   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:01.262717   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:01.298328   58817 cri.go:89] found id: ""
	I0719 15:49:01.298358   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.298370   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:01.298378   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:01.298439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:01.333181   58817 cri.go:89] found id: ""
	I0719 15:49:01.333208   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.333218   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:01.333225   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:01.333284   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:01.369952   58817 cri.go:89] found id: ""
	I0719 15:49:01.369980   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.369990   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:01.369997   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:01.370076   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:01.405232   58817 cri.go:89] found id: ""
	I0719 15:49:01.405263   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.405273   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:01.405280   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:01.405340   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:01.442960   58817 cri.go:89] found id: ""
	I0719 15:49:01.442989   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.442999   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:01.443009   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:01.443036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:01.493680   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:01.493712   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:01.506699   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:01.506732   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:01.586525   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:01.586547   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:01.586562   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:01.673849   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:01.673897   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:00.313552   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:02.812079   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:59.513087   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:01.514094   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.013514   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:00.583159   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:03.082968   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.219314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:04.233386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:04.233481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:04.274762   58817 cri.go:89] found id: ""
	I0719 15:49:04.274792   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.274802   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:04.274826   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:04.274881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:04.312047   58817 cri.go:89] found id: ""
	I0719 15:49:04.312073   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.312082   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:04.312089   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:04.312164   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:04.351258   58817 cri.go:89] found id: ""
	I0719 15:49:04.351293   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.351307   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:04.351314   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:04.351373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:04.385969   58817 cri.go:89] found id: ""
	I0719 15:49:04.385994   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.386002   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:04.386007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:04.386054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:04.425318   58817 cri.go:89] found id: ""
	I0719 15:49:04.425342   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.425351   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:04.425358   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:04.425416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:04.462578   58817 cri.go:89] found id: ""
	I0719 15:49:04.462607   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.462618   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:04.462626   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:04.462682   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:04.502967   58817 cri.go:89] found id: ""
	I0719 15:49:04.502999   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.503017   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:04.503025   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:04.503084   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:04.540154   58817 cri.go:89] found id: ""
	I0719 15:49:04.540185   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.540195   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:04.540230   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:04.540246   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:04.596126   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:04.596164   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:04.610468   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:04.610509   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:04.683759   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:04.683783   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:04.683803   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:04.764758   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:04.764796   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.303933   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:07.317959   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:07.318031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:07.356462   58817 cri.go:89] found id: ""
	I0719 15:49:07.356490   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.356498   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:07.356511   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:07.356566   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:07.391533   58817 cri.go:89] found id: ""
	I0719 15:49:07.391563   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.391574   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:07.391582   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:07.391662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:07.427877   58817 cri.go:89] found id: ""
	I0719 15:49:07.427914   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.427922   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:07.427927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:07.428005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:07.464667   58817 cri.go:89] found id: ""
	I0719 15:49:07.464691   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.464699   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:07.464704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:07.464768   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:07.499296   58817 cri.go:89] found id: ""
	I0719 15:49:07.499321   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.499329   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:07.499336   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:07.499400   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:07.541683   58817 cri.go:89] found id: ""
	I0719 15:49:07.541715   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.541726   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:07.541733   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:07.541791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:07.577698   58817 cri.go:89] found id: ""
	I0719 15:49:07.577726   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.577737   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:07.577744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:07.577799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:07.613871   58817 cri.go:89] found id: ""
	I0719 15:49:07.613904   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.613914   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:07.613926   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:07.613942   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:07.690982   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:07.691006   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:07.691021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:07.778212   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:07.778277   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.820821   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:07.820866   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:07.873053   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:07.873097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
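The block above is one iteration of a probe loop that repeats for the rest of this window: process 58817 looks for a running kube-apiserver process, then asks the CRI runtime for containers of every control-plane component, finds none, and falls back to collecting kubelet, dmesg, CRI-O and container-status logs. Below is a minimal Go sketch of that probe, reusing the exact commands from the log; the surrounding code is illustrative only and is not minikube's cri/logs packages.

// Illustrative reconstruction of the probe loop logged above. The pgrep and
// crictl command lines are copied verbatim from the log; everything else is
// a simplified stand-in for minikube's own code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs runs: sudo crictl ps -a --quiet --name=<component>
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for {
		// sudo pgrep -xnf kube-apiserver.*minikube.*
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			fmt.Printf("%s: %d containers\n", c, len(containerIDs(c)))
		}
		time.Sleep(3 * time.Second) // the log shows roughly three seconds between rounds
	}
}

With no apiserver container ever appearing, every round ends the same way, which is why the same "No container was found matching ..." warnings recur below.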
	I0719 15:49:05.312525   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.812891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:06.013654   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:08.015552   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:05.083931   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.583371   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.387941   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:10.401132   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:10.401205   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:10.437084   58817 cri.go:89] found id: ""
	I0719 15:49:10.437112   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.437120   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:10.437178   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:10.437243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:10.472675   58817 cri.go:89] found id: ""
	I0719 15:49:10.472703   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.472712   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:10.472720   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:10.472780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:10.506448   58817 cri.go:89] found id: ""
	I0719 15:49:10.506480   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.506490   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:10.506497   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:10.506544   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:10.542574   58817 cri.go:89] found id: ""
	I0719 15:49:10.542604   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.542612   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:10.542618   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:10.542701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:10.575963   58817 cri.go:89] found id: ""
	I0719 15:49:10.575990   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.575999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:10.576005   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:10.576063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:10.614498   58817 cri.go:89] found id: ""
	I0719 15:49:10.614529   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.614539   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:10.614548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:10.614613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:10.652802   58817 cri.go:89] found id: ""
	I0719 15:49:10.652825   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.652833   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:10.652838   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:10.652886   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:10.688985   58817 cri.go:89] found id: ""
	I0719 15:49:10.689019   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.689029   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:10.689041   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:10.689058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:10.741552   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:10.741586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.756514   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:10.756542   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:10.837916   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:10.837940   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:10.837956   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:10.919878   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:10.919924   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:09.824389   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.312960   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.512671   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.513359   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.082891   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:14.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
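Interleaved with that probe loop, three other minikube processes (58376, 58417 and 59208) keep polling their metrics-server pods, and the repeated pod_ready.go:102 lines show the Ready condition staying "False" for the whole window. The following is a hedged client-go sketch of such a readiness check; it is not minikube's pod_ready.go. The namespace and pod name are taken from the log, and the rest is illustrative.

// Illustrative only: checks the PodReady condition in the spirit of the
// logged pod_ready.go poll. Assumes a reachable cluster and a kubeconfig
// in the default location.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-569cc877fc-h7hgv", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("metrics-server is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second) // roughly the cadence of the pod_ready.go:102 lines
	}
}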
	I0719 15:49:13.462603   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:13.476387   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:13.476449   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:13.514170   58817 cri.go:89] found id: ""
	I0719 15:49:13.514195   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.514205   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:13.514211   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:13.514281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:13.548712   58817 cri.go:89] found id: ""
	I0719 15:49:13.548739   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.548747   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:13.548753   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:13.548808   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:13.582623   58817 cri.go:89] found id: ""
	I0719 15:49:13.582648   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.582657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:13.582664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:13.582721   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:13.619343   58817 cri.go:89] found id: ""
	I0719 15:49:13.619369   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.619379   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:13.619385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:13.619444   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:13.655755   58817 cri.go:89] found id: ""
	I0719 15:49:13.655785   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.655793   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:13.655798   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:13.655856   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:13.691021   58817 cri.go:89] found id: ""
	I0719 15:49:13.691104   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.691124   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:13.691133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:13.691196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:13.728354   58817 cri.go:89] found id: ""
	I0719 15:49:13.728380   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.728390   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:13.728397   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:13.728459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:13.764498   58817 cri.go:89] found id: ""
	I0719 15:49:13.764526   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.764535   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:13.764544   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:13.764557   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.803474   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:13.803500   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:13.854709   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:13.854742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:13.870499   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:13.870526   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:13.943250   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:13.943270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:13.943282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.525806   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:16.539483   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:16.539558   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:16.574003   58817 cri.go:89] found id: ""
	I0719 15:49:16.574032   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.574043   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:16.574050   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:16.574112   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:16.610637   58817 cri.go:89] found id: ""
	I0719 15:49:16.610668   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.610676   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:16.610682   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:16.610731   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:16.648926   58817 cri.go:89] found id: ""
	I0719 15:49:16.648957   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.648968   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:16.648975   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:16.649027   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:16.682819   58817 cri.go:89] found id: ""
	I0719 15:49:16.682848   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.682859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:16.682866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:16.682919   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:16.719879   58817 cri.go:89] found id: ""
	I0719 15:49:16.719912   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.719922   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:16.719930   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:16.719988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:16.755776   58817 cri.go:89] found id: ""
	I0719 15:49:16.755809   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.755820   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:16.755829   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:16.755903   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:16.792158   58817 cri.go:89] found id: ""
	I0719 15:49:16.792186   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.792193   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:16.792199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:16.792260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:16.829694   58817 cri.go:89] found id: ""
	I0719 15:49:16.829722   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.829733   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:16.829741   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:16.829761   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:16.843522   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:16.843552   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:16.914025   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:16.914047   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:16.914063   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.996672   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:16.996709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:17.042138   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:17.042170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:14.813090   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.311701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:15.014386   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.513993   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:16.584566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.082569   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.597598   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:19.611433   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:19.611487   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:19.646047   58817 cri.go:89] found id: ""
	I0719 15:49:19.646073   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.646080   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:19.646086   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:19.646145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:19.683589   58817 cri.go:89] found id: ""
	I0719 15:49:19.683620   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.683632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:19.683643   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:19.683701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:19.722734   58817 cri.go:89] found id: ""
	I0719 15:49:19.722761   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.722771   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:19.722778   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:19.722836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:19.759418   58817 cri.go:89] found id: ""
	I0719 15:49:19.759445   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.759454   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:19.759459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:19.759522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:19.795168   58817 cri.go:89] found id: ""
	I0719 15:49:19.795193   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.795201   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:19.795206   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:19.795259   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:19.830930   58817 cri.go:89] found id: ""
	I0719 15:49:19.830959   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.830969   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:19.830976   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:19.831035   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:19.866165   58817 cri.go:89] found id: ""
	I0719 15:49:19.866187   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.866195   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:19.866201   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:19.866252   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:19.899415   58817 cri.go:89] found id: ""
	I0719 15:49:19.899446   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.899456   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:19.899467   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:19.899482   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.950944   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:19.950975   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:19.964523   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:19.964545   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:20.032244   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:20.032270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:20.032290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:20.110285   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:20.110317   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:22.650693   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:22.666545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:22.666618   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:22.709820   58817 cri.go:89] found id: ""
	I0719 15:49:22.709846   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.709854   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:22.709860   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:22.709905   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:22.745373   58817 cri.go:89] found id: ""
	I0719 15:49:22.745398   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.745406   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:22.745411   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:22.745461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:22.785795   58817 cri.go:89] found id: ""
	I0719 15:49:22.785828   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.785838   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:22.785846   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:22.785904   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:22.826542   58817 cri.go:89] found id: ""
	I0719 15:49:22.826569   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.826579   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:22.826587   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:22.826648   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:22.866761   58817 cri.go:89] found id: ""
	I0719 15:49:22.866789   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.866800   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:22.866807   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:22.866868   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:22.913969   58817 cri.go:89] found id: ""
	I0719 15:49:22.913999   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.914009   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:22.914017   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:22.914082   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:22.950230   58817 cri.go:89] found id: ""
	I0719 15:49:22.950287   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.950298   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:22.950305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:22.950366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:22.986400   58817 cri.go:89] found id: ""
	I0719 15:49:22.986424   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.986434   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:22.986446   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:22.986460   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:23.072119   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:23.072153   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:23.111021   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:23.111053   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:23.161490   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:23.161518   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:23.174729   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:23.174766   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:23.251205   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
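Because no control-plane containers exist, each round of log gathering follows the same fallback path seen above: kubelet and CRI-O journals, filtered dmesg, container status, and a "describe nodes" attempt that always fails with "connection refused" on localhost:8443, since nothing is listening there. Here is a small Go sketch that replays those gathering commands; the command strings are copied from the log, while the wrapper is illustrative and not minikube's logs.go.

// Replays the log-gathering commands from the report and notes which fail.
// "describe nodes" is expected to fail while no apiserver serves on :8443.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("=== %s: err=%v, %d bytes of output ===\n", c.name, err, len(out))
	}
}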
	I0719 15:49:19.814129   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.814762   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:23.817102   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:20.012767   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:22.512467   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.587074   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:24.082829   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.752355   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:25.765501   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:25.765559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:25.801073   58817 cri.go:89] found id: ""
	I0719 15:49:25.801107   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.801117   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:25.801126   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:25.801187   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:25.839126   58817 cri.go:89] found id: ""
	I0719 15:49:25.839151   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.839158   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:25.839163   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:25.839210   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:25.873081   58817 cri.go:89] found id: ""
	I0719 15:49:25.873110   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.873120   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:25.873134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:25.873183   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:25.908874   58817 cri.go:89] found id: ""
	I0719 15:49:25.908910   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.908921   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:25.908929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:25.908988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:25.945406   58817 cri.go:89] found id: ""
	I0719 15:49:25.945431   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.945439   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:25.945445   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:25.945515   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:25.978276   58817 cri.go:89] found id: ""
	I0719 15:49:25.978298   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.978306   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:25.978312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:25.978359   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:26.013749   58817 cri.go:89] found id: ""
	I0719 15:49:26.013776   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.013786   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:26.013792   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:26.013840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:26.046225   58817 cri.go:89] found id: ""
	I0719 15:49:26.046269   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.046280   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:26.046290   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:26.046305   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:26.086785   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:26.086808   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:26.138746   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:26.138777   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:26.152114   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:26.152139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:26.224234   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:26.224262   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:26.224279   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:26.312496   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.312687   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.015437   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:27.514515   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:26.084854   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.584103   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.802738   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:28.817246   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:28.817321   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:28.852398   58817 cri.go:89] found id: ""
	I0719 15:49:28.852429   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.852437   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:28.852449   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:28.852500   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:28.890337   58817 cri.go:89] found id: ""
	I0719 15:49:28.890368   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.890378   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:28.890386   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:28.890446   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:28.929083   58817 cri.go:89] found id: ""
	I0719 15:49:28.929106   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.929113   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:28.929119   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:28.929173   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:28.967708   58817 cri.go:89] found id: ""
	I0719 15:49:28.967735   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.967745   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:28.967752   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:28.967812   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:29.001087   58817 cri.go:89] found id: ""
	I0719 15:49:29.001115   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.001131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:29.001139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:29.001198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:29.039227   58817 cri.go:89] found id: ""
	I0719 15:49:29.039258   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.039268   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:29.039275   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:29.039333   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:29.079927   58817 cri.go:89] found id: ""
	I0719 15:49:29.079955   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.079965   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:29.079973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:29.080037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:29.115035   58817 cri.go:89] found id: ""
	I0719 15:49:29.115060   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.115070   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:29.115080   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:29.115094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:29.168452   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:29.168487   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:29.182483   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:29.182517   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:29.256139   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:29.256177   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:29.256193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:29.342435   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:29.342472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:31.888988   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:31.902450   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:31.902524   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:31.940007   58817 cri.go:89] found id: ""
	I0719 15:49:31.940035   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.940045   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:31.940053   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:31.940111   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:31.978055   58817 cri.go:89] found id: ""
	I0719 15:49:31.978089   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.978101   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:31.978109   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:31.978168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:32.011666   58817 cri.go:89] found id: ""
	I0719 15:49:32.011697   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.011707   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:32.011714   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:32.011779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:32.046326   58817 cri.go:89] found id: ""
	I0719 15:49:32.046363   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.046373   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:32.046383   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:32.046447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:32.082387   58817 cri.go:89] found id: ""
	I0719 15:49:32.082416   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.082425   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:32.082432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:32.082488   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:32.118653   58817 cri.go:89] found id: ""
	I0719 15:49:32.118693   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.118703   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:32.118710   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:32.118769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:32.154053   58817 cri.go:89] found id: ""
	I0719 15:49:32.154075   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.154082   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:32.154088   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:32.154134   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:32.189242   58817 cri.go:89] found id: ""
	I0719 15:49:32.189272   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.189283   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:32.189293   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:32.189309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:32.263285   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:32.263313   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:32.263329   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:32.341266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:32.341302   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:32.380827   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:32.380852   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:32.432888   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:32.432922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:30.313153   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:32.812075   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:29.514963   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.515163   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.014174   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.083793   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:33.083838   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.948894   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:34.963787   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:34.963840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:35.000752   58817 cri.go:89] found id: ""
	I0719 15:49:35.000782   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.000788   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:35.000794   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:35.000849   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:35.038325   58817 cri.go:89] found id: ""
	I0719 15:49:35.038355   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.038367   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:35.038375   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:35.038433   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:35.074945   58817 cri.go:89] found id: ""
	I0719 15:49:35.074972   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.074981   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:35.074987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:35.075031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:35.111644   58817 cri.go:89] found id: ""
	I0719 15:49:35.111671   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.111681   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:35.111688   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:35.111746   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:35.146101   58817 cri.go:89] found id: ""
	I0719 15:49:35.146132   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.146141   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:35.146148   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:35.146198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:35.185147   58817 cri.go:89] found id: ""
	I0719 15:49:35.185173   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.185181   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:35.185188   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:35.185233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:35.227899   58817 cri.go:89] found id: ""
	I0719 15:49:35.227931   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.227941   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:35.227949   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:35.228010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:35.265417   58817 cri.go:89] found id: ""
	I0719 15:49:35.265441   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.265451   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:35.265462   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:35.265477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:35.316534   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:35.316567   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:35.330131   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:35.330154   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:35.401068   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:35.401091   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:35.401107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:35.477126   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:35.477170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.019443   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:38.035957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:38.036032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:38.078249   58817 cri.go:89] found id: ""
	I0719 15:49:38.078278   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.078288   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:38.078296   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:38.078367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:38.125072   58817 cri.go:89] found id: ""
	I0719 15:49:38.125098   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.125106   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:38.125112   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:38.125171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:38.165134   58817 cri.go:89] found id: ""
	I0719 15:49:38.165160   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.165170   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:38.165178   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:38.165233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:38.204968   58817 cri.go:89] found id: ""
	I0719 15:49:38.204995   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.205004   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:38.205013   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:38.205074   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:38.237132   58817 cri.go:89] found id: ""
	I0719 15:49:38.237157   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.237167   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:38.237174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:38.237231   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:34.812542   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.311929   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.312244   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:36.513892   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.013261   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:35.084098   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.587696   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:38.274661   58817 cri.go:89] found id: ""
	I0719 15:49:38.274691   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.274699   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:38.274704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:38.274747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:38.311326   58817 cri.go:89] found id: ""
	I0719 15:49:38.311354   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.311365   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:38.311372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:38.311428   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:38.348071   58817 cri.go:89] found id: ""
	I0719 15:49:38.348099   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.348110   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:38.348120   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:38.348134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:38.432986   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:38.433021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.472439   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:38.472486   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:38.526672   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:38.526706   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:38.540777   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:38.540800   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:38.617657   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.118442   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:41.131935   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:41.132016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:41.164303   58817 cri.go:89] found id: ""
	I0719 15:49:41.164330   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.164342   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:41.164348   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:41.164396   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:41.197878   58817 cri.go:89] found id: ""
	I0719 15:49:41.197901   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.197909   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:41.197927   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:41.197979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:41.231682   58817 cri.go:89] found id: ""
	I0719 15:49:41.231712   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.231722   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:41.231730   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:41.231793   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:41.268328   58817 cri.go:89] found id: ""
	I0719 15:49:41.268354   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.268364   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:41.268372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:41.268422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:41.306322   58817 cri.go:89] found id: ""
	I0719 15:49:41.306350   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.306358   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:41.306365   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:41.306416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:41.342332   58817 cri.go:89] found id: ""
	I0719 15:49:41.342361   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.342372   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:41.342379   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:41.342440   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:41.378326   58817 cri.go:89] found id: ""
	I0719 15:49:41.378352   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.378362   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:41.378371   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:41.378422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:41.410776   58817 cri.go:89] found id: ""
	I0719 15:49:41.410804   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.410814   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:41.410824   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:41.410843   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:41.424133   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:41.424157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:41.498684   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.498764   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:41.498784   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:41.583440   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:41.583472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:41.624962   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:41.624998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:41.313207   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.815916   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:41.013495   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.513445   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:40.082726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:42.583599   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.584503   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.177094   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:44.191411   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:44.191466   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:44.226809   58817 cri.go:89] found id: ""
	I0719 15:49:44.226837   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.226847   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:44.226855   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:44.226951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:44.262361   58817 cri.go:89] found id: ""
	I0719 15:49:44.262391   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.262402   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:44.262408   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:44.262452   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:44.295729   58817 cri.go:89] found id: ""
	I0719 15:49:44.295758   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.295768   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:44.295775   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:44.295836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:44.330968   58817 cri.go:89] found id: ""
	I0719 15:49:44.330996   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.331005   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:44.331012   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:44.331068   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:44.367914   58817 cri.go:89] found id: ""
	I0719 15:49:44.367937   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.367945   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:44.367951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:44.368005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:44.401127   58817 cri.go:89] found id: ""
	I0719 15:49:44.401151   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.401159   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:44.401164   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:44.401207   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:44.435696   58817 cri.go:89] found id: ""
	I0719 15:49:44.435724   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.435734   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:44.435741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:44.435803   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:44.481553   58817 cri.go:89] found id: ""
	I0719 15:49:44.481582   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.481592   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:44.481603   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:44.481618   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:44.573147   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:44.573181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:44.618556   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:44.618580   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.673328   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:44.673364   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:44.687806   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:44.687835   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:44.763624   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
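	(The repeated cycle above is minikube probing for control-plane containers and gathering node logs while the API server at localhost:8443 refuses connections. A minimal sketch of the same checks, run manually on the node, using only the commands that appear verbatim in the log above; the kubectl binary path and kubeconfig location are the ones minikube reports, not assumptions of ours.)
	# Is any apiserver process or container present at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Gather the logs minikube collects when the probe fails.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# This is the step that keeps failing while 8443 is refused.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig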
	I0719 15:49:47.264039   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:47.277902   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:47.277984   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:47.318672   58817 cri.go:89] found id: ""
	I0719 15:49:47.318702   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.318713   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:47.318720   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:47.318780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:47.360410   58817 cri.go:89] found id: ""
	I0719 15:49:47.360434   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.360444   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:47.360451   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:47.360507   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:47.397890   58817 cri.go:89] found id: ""
	I0719 15:49:47.397918   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.397925   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:47.397931   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:47.397981   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:47.438930   58817 cri.go:89] found id: ""
	I0719 15:49:47.438960   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.438971   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:47.438981   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:47.439040   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:47.479242   58817 cri.go:89] found id: ""
	I0719 15:49:47.479267   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.479277   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:47.479285   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:47.479341   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:47.518583   58817 cri.go:89] found id: ""
	I0719 15:49:47.518610   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.518620   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:47.518628   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:47.518686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:47.553714   58817 cri.go:89] found id: ""
	I0719 15:49:47.553736   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.553744   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:47.553750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:47.553798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:47.591856   58817 cri.go:89] found id: ""
	I0719 15:49:47.591879   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.591886   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:47.591893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:47.591904   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:47.644911   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:47.644951   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:47.659718   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:47.659742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:47.735693   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.735713   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:47.735727   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:47.816090   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:47.816121   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:46.313534   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.811536   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:46.012299   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.515396   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:47.082848   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:49.083291   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.358703   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:50.373832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:50.373908   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:50.408598   58817 cri.go:89] found id: ""
	I0719 15:49:50.408640   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.408649   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:50.408655   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:50.408701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:50.446067   58817 cri.go:89] found id: ""
	I0719 15:49:50.446096   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.446104   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:50.446110   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:50.446152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:50.480886   58817 cri.go:89] found id: ""
	I0719 15:49:50.480918   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.480927   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:50.480933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:50.480997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:50.514680   58817 cri.go:89] found id: ""
	I0719 15:49:50.514707   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.514717   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:50.514724   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:50.514779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:50.550829   58817 cri.go:89] found id: ""
	I0719 15:49:50.550854   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.550861   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:50.550866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:50.550910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:50.585407   58817 cri.go:89] found id: ""
	I0719 15:49:50.585434   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.585444   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:50.585452   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:50.585511   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:50.623083   58817 cri.go:89] found id: ""
	I0719 15:49:50.623110   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.623121   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:50.623129   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:50.623181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:50.667231   58817 cri.go:89] found id: ""
	I0719 15:49:50.667258   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.667266   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:50.667274   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:50.667290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:50.718998   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:50.719032   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:50.733560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:50.733595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:50.800276   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:50.800298   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:50.800310   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:50.881314   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:50.881354   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:50.813781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:52.817124   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.516602   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.012716   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:51.083390   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.583030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.427179   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:53.444191   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:53.444250   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:53.481092   58817 cri.go:89] found id: ""
	I0719 15:49:53.481125   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.481135   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:53.481143   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:53.481202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:53.517308   58817 cri.go:89] found id: ""
	I0719 15:49:53.517332   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.517340   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:53.517345   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:53.517390   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:53.552638   58817 cri.go:89] found id: ""
	I0719 15:49:53.552667   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.552677   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:53.552684   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:53.552750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:53.587003   58817 cri.go:89] found id: ""
	I0719 15:49:53.587027   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.587034   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:53.587044   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:53.587093   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:53.620361   58817 cri.go:89] found id: ""
	I0719 15:49:53.620389   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.620399   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:53.620406   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:53.620464   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:53.659231   58817 cri.go:89] found id: ""
	I0719 15:49:53.659255   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.659262   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:53.659267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:53.659323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:53.695312   58817 cri.go:89] found id: ""
	I0719 15:49:53.695345   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.695355   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:53.695362   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:53.695430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:53.735670   58817 cri.go:89] found id: ""
	I0719 15:49:53.735698   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.735708   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:53.735718   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:53.735733   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:53.750912   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:53.750940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:53.818038   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:53.818064   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:53.818077   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:53.902200   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:53.902259   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.945805   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:53.945847   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.498178   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:56.511454   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:56.511541   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:56.548043   58817 cri.go:89] found id: ""
	I0719 15:49:56.548070   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.548081   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:56.548089   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:56.548149   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:56.583597   58817 cri.go:89] found id: ""
	I0719 15:49:56.583620   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.583632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:56.583651   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:56.583710   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:56.622673   58817 cri.go:89] found id: ""
	I0719 15:49:56.622704   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.622714   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:56.622722   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:56.622785   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:56.659663   58817 cri.go:89] found id: ""
	I0719 15:49:56.659691   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.659702   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:56.659711   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:56.659764   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:56.694072   58817 cri.go:89] found id: ""
	I0719 15:49:56.694097   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.694105   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:56.694111   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:56.694158   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:56.730104   58817 cri.go:89] found id: ""
	I0719 15:49:56.730131   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.730139   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:56.730144   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:56.730202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:56.762952   58817 cri.go:89] found id: ""
	I0719 15:49:56.762977   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.762988   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:56.762995   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:56.763059   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:56.800091   58817 cri.go:89] found id: ""
	I0719 15:49:56.800114   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.800122   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:56.800130   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:56.800141   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:56.843328   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:56.843363   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.894700   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:56.894734   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:56.908975   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:56.908999   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:56.980062   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:56.980087   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:56.980099   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:55.312032   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.813778   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:55.013719   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.014070   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:56.083506   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:58.582593   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.557467   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:59.571083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:59.571151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:59.606593   58817 cri.go:89] found id: ""
	I0719 15:49:59.606669   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.606680   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:59.606688   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:59.606743   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:59.643086   58817 cri.go:89] found id: ""
	I0719 15:49:59.643115   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.643126   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:59.643134   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:59.643188   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:59.678976   58817 cri.go:89] found id: ""
	I0719 15:49:59.678995   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.679002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:59.679008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:59.679060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:59.713450   58817 cri.go:89] found id: ""
	I0719 15:49:59.713483   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.713490   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:59.713495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:59.713540   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:59.749902   58817 cri.go:89] found id: ""
	I0719 15:49:59.749924   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.749932   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:59.749938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:59.749985   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:59.793298   58817 cri.go:89] found id: ""
	I0719 15:49:59.793327   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.793335   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:59.793341   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:59.793399   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:59.835014   58817 cri.go:89] found id: ""
	I0719 15:49:59.835040   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.835047   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:59.835053   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:59.835101   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:59.874798   58817 cri.go:89] found id: ""
	I0719 15:49:59.874824   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.874831   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:59.874840   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:59.874851   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:59.948173   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:59.948195   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:59.948210   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:00.026793   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:00.026828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:00.066659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:00.066687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:00.119005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:00.119036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:02.634375   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:02.648845   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:02.648918   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:02.683204   58817 cri.go:89] found id: ""
	I0719 15:50:02.683231   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.683240   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:02.683246   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:02.683308   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:02.718869   58817 cri.go:89] found id: ""
	I0719 15:50:02.718901   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.718914   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:02.718921   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:02.718979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:02.758847   58817 cri.go:89] found id: ""
	I0719 15:50:02.758874   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.758885   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:02.758892   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:02.758951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:02.800199   58817 cri.go:89] found id: ""
	I0719 15:50:02.800230   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.800238   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:02.800243   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:02.800289   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:02.840302   58817 cri.go:89] found id: ""
	I0719 15:50:02.840334   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.840345   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:02.840353   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:02.840415   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:02.874769   58817 cri.go:89] found id: ""
	I0719 15:50:02.874794   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.874801   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:02.874818   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:02.874885   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:02.914492   58817 cri.go:89] found id: ""
	I0719 15:50:02.914522   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.914532   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:02.914540   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:02.914601   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:02.951548   58817 cri.go:89] found id: ""
	I0719 15:50:02.951577   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.951588   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:02.951599   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:02.951613   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:03.003081   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:03.003118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:03.017738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:03.017767   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:03.090925   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:03.090947   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:03.090958   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:03.169066   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:03.169101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:59.815894   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.312541   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.513158   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.013500   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:00.583268   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:03.082967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.712269   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:05.724799   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:05.724872   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:05.759074   58817 cri.go:89] found id: ""
	I0719 15:50:05.759101   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.759108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:05.759113   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:05.759169   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:05.798316   58817 cri.go:89] found id: ""
	I0719 15:50:05.798413   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.798432   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:05.798442   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:05.798504   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:05.834861   58817 cri.go:89] found id: ""
	I0719 15:50:05.834890   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.834898   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:05.834903   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:05.834962   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:05.868547   58817 cri.go:89] found id: ""
	I0719 15:50:05.868574   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.868582   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:05.868588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:05.868691   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:05.903684   58817 cri.go:89] found id: ""
	I0719 15:50:05.903718   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.903730   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:05.903738   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:05.903798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:05.938521   58817 cri.go:89] found id: ""
	I0719 15:50:05.938552   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.938567   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:05.938576   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:05.938628   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:05.973683   58817 cri.go:89] found id: ""
	I0719 15:50:05.973710   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.973717   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:05.973723   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:05.973825   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:06.010528   58817 cri.go:89] found id: ""
	I0719 15:50:06.010559   58817 logs.go:276] 0 containers: []
	W0719 15:50:06.010569   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:06.010580   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:06.010593   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:06.053090   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:06.053145   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:06.106906   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:06.106939   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:06.121914   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:06.121944   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:06.197465   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:06.197492   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:06.197507   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:04.814326   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.314104   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:04.513144   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.013900   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.014269   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.582967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.583076   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.583550   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:08.782285   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:08.795115   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:08.795180   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:08.834264   58817 cri.go:89] found id: ""
	I0719 15:50:08.834295   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.834306   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:08.834314   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:08.834371   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:08.873227   58817 cri.go:89] found id: ""
	I0719 15:50:08.873258   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.873268   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:08.873276   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:08.873330   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:08.907901   58817 cri.go:89] found id: ""
	I0719 15:50:08.907929   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.907940   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:08.907948   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:08.908011   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:08.941350   58817 cri.go:89] found id: ""
	I0719 15:50:08.941381   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.941391   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:08.941400   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:08.941453   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:08.978469   58817 cri.go:89] found id: ""
	I0719 15:50:08.978495   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.978502   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:08.978508   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:08.978563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:09.017469   58817 cri.go:89] found id: ""
	I0719 15:50:09.017492   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.017501   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:09.017509   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:09.017563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:09.056675   58817 cri.go:89] found id: ""
	I0719 15:50:09.056703   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.056711   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:09.056718   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:09.056769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:09.096655   58817 cri.go:89] found id: ""
	I0719 15:50:09.096680   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.096688   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:09.096696   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:09.096710   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:09.135765   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:09.135791   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:09.189008   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:09.189044   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:09.203988   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:09.204014   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:09.278418   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:09.278440   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:09.278453   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:11.857017   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:11.870592   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:11.870650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:11.907057   58817 cri.go:89] found id: ""
	I0719 15:50:11.907088   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.907097   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:11.907103   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:11.907152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:11.944438   58817 cri.go:89] found id: ""
	I0719 15:50:11.944466   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.944476   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:11.944484   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:11.944547   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:11.986506   58817 cri.go:89] found id: ""
	I0719 15:50:11.986534   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.986545   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:11.986553   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:11.986610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:12.026171   58817 cri.go:89] found id: ""
	I0719 15:50:12.026221   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.026250   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:12.026260   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:12.026329   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:12.060990   58817 cri.go:89] found id: ""
	I0719 15:50:12.061018   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.061028   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:12.061036   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:12.061097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:12.098545   58817 cri.go:89] found id: ""
	I0719 15:50:12.098573   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.098584   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:12.098591   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:12.098650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:12.134949   58817 cri.go:89] found id: ""
	I0719 15:50:12.134978   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.134989   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:12.134996   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:12.135061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:12.171142   58817 cri.go:89] found id: ""
	I0719 15:50:12.171165   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.171173   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:12.171181   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:12.171193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:12.211496   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:12.211536   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:12.266024   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:12.266060   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:12.280951   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:12.280985   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:12.352245   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:12.352269   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:12.352280   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:09.813831   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.815120   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.815551   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.512872   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.514351   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.584717   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.082745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.929733   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:14.943732   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:14.943815   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:14.980506   58817 cri.go:89] found id: ""
	I0719 15:50:14.980529   58817 logs.go:276] 0 containers: []
	W0719 15:50:14.980539   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:14.980545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:14.980590   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:15.015825   58817 cri.go:89] found id: ""
	I0719 15:50:15.015853   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.015863   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:15.015870   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:15.015937   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:15.054862   58817 cri.go:89] found id: ""
	I0719 15:50:15.054894   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.054905   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:15.054913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:15.054973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:15.092542   58817 cri.go:89] found id: ""
	I0719 15:50:15.092573   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.092590   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:15.092598   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:15.092663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:15.127815   58817 cri.go:89] found id: ""
	I0719 15:50:15.127843   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.127853   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:15.127865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:15.127931   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:15.166423   58817 cri.go:89] found id: ""
	I0719 15:50:15.166446   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.166453   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:15.166459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:15.166517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.199240   58817 cri.go:89] found id: ""
	I0719 15:50:15.199268   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.199277   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:15.199283   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:15.199336   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:15.231927   58817 cri.go:89] found id: ""
	I0719 15:50:15.231957   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.231966   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:15.231978   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:15.231994   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:15.284551   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:15.284586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:15.299152   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:15.299181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:15.374085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:15.374107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:15.374123   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:15.458103   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:15.458144   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:18.003862   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:18.019166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:18.019215   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:18.053430   58817 cri.go:89] found id: ""
	I0719 15:50:18.053470   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.053482   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:18.053492   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:18.053565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:18.091897   58817 cri.go:89] found id: ""
	I0719 15:50:18.091922   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.091931   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:18.091936   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:18.091997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:18.127239   58817 cri.go:89] found id: ""
	I0719 15:50:18.127266   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.127277   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:18.127287   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:18.127346   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:18.163927   58817 cri.go:89] found id: ""
	I0719 15:50:18.163953   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.163965   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:18.163973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:18.164032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:18.199985   58817 cri.go:89] found id: ""
	I0719 15:50:18.200015   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.200027   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:18.200034   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:18.200096   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:18.234576   58817 cri.go:89] found id: ""
	I0719 15:50:18.234603   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.234614   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:18.234625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:18.234686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.815701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:17.816052   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.012834   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.014504   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.582156   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.583011   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.270493   58817 cri.go:89] found id: ""
	I0719 15:50:18.270516   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.270526   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:18.270532   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:18.270588   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:18.306779   58817 cri.go:89] found id: ""
	I0719 15:50:18.306813   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.306821   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:18.306832   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:18.306850   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:18.375782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:18.375814   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:18.390595   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:18.390630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:18.459204   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:18.459227   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:18.459243   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:18.540667   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:18.540724   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.084736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:21.099416   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:21.099495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:21.133193   58817 cri.go:89] found id: ""
	I0719 15:50:21.133216   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.133224   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:21.133231   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:21.133309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:21.174649   58817 cri.go:89] found id: ""
	I0719 15:50:21.174679   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.174689   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:21.174697   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:21.174757   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:21.208279   58817 cri.go:89] found id: ""
	I0719 15:50:21.208309   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.208319   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:21.208325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:21.208386   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:21.242199   58817 cri.go:89] found id: ""
	I0719 15:50:21.242222   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.242229   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:21.242247   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:21.242301   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:21.278018   58817 cri.go:89] found id: ""
	I0719 15:50:21.278050   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.278059   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:21.278069   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:21.278125   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:21.314397   58817 cri.go:89] found id: ""
	I0719 15:50:21.314419   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.314427   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:21.314435   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:21.314490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:21.349041   58817 cri.go:89] found id: ""
	I0719 15:50:21.349067   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.349075   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:21.349080   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:21.349129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:21.387325   58817 cri.go:89] found id: ""
	I0719 15:50:21.387353   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.387361   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:21.387369   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:21.387384   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:21.401150   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:21.401177   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:21.465784   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:21.465810   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:21.465821   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:21.545965   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:21.545998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.584054   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:21.584081   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:20.312912   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:22.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:20.513572   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.014103   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:21.082689   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.583483   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:24.139199   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:24.152485   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:24.152552   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:24.186387   58817 cri.go:89] found id: ""
	I0719 15:50:24.186417   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.186427   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:24.186435   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:24.186494   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:24.226061   58817 cri.go:89] found id: ""
	I0719 15:50:24.226093   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.226103   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:24.226111   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:24.226168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:24.265542   58817 cri.go:89] found id: ""
	I0719 15:50:24.265566   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.265574   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:24.265579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:24.265630   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:24.300277   58817 cri.go:89] found id: ""
	I0719 15:50:24.300308   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.300318   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:24.300325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:24.300378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:24.340163   58817 cri.go:89] found id: ""
	I0719 15:50:24.340192   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.340203   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:24.340211   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:24.340270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:24.375841   58817 cri.go:89] found id: ""
	I0719 15:50:24.375863   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.375873   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:24.375881   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:24.375941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:24.413528   58817 cri.go:89] found id: ""
	I0719 15:50:24.413558   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.413569   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:24.413577   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:24.413641   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:24.451101   58817 cri.go:89] found id: ""
	I0719 15:50:24.451129   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.451139   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:24.451148   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:24.451163   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:24.491150   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:24.491178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.544403   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:24.544436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:24.560376   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:24.560407   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:24.633061   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:24.633081   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:24.633097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.214261   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:27.227642   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:27.227724   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:27.263805   58817 cri.go:89] found id: ""
	I0719 15:50:27.263838   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.263851   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:27.263859   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:27.263941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:27.299817   58817 cri.go:89] found id: ""
	I0719 15:50:27.299860   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.299872   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:27.299879   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:27.299947   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:27.339924   58817 cri.go:89] found id: ""
	I0719 15:50:27.339953   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.339963   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:27.339971   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:27.340036   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:27.375850   58817 cri.go:89] found id: ""
	I0719 15:50:27.375877   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.375885   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:27.375891   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:27.375940   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:27.410395   58817 cri.go:89] found id: ""
	I0719 15:50:27.410420   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.410429   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:27.410437   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:27.410498   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:27.444124   58817 cri.go:89] found id: ""
	I0719 15:50:27.444154   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.444162   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:27.444167   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:27.444230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:27.478162   58817 cri.go:89] found id: ""
	I0719 15:50:27.478191   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.478202   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:27.478210   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:27.478285   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:27.514901   58817 cri.go:89] found id: ""
	I0719 15:50:27.514939   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.514949   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:27.514959   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:27.514973   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.591783   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:27.591815   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:27.629389   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:27.629431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:27.684318   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:27.684351   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:27.698415   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:27.698441   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:27.770032   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:25.312127   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.312599   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.512955   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.515102   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.583597   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:28.083843   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.270332   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:30.284645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:30.284716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:30.324096   58817 cri.go:89] found id: ""
	I0719 15:50:30.324120   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.324128   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:30.324133   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:30.324181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:30.362682   58817 cri.go:89] found id: ""
	I0719 15:50:30.362749   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.362769   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:30.362777   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:30.362848   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:30.400797   58817 cri.go:89] found id: ""
	I0719 15:50:30.400829   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.400840   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:30.400847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:30.400910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:30.438441   58817 cri.go:89] found id: ""
	I0719 15:50:30.438471   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.438482   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:30.438490   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:30.438556   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:30.481525   58817 cri.go:89] found id: ""
	I0719 15:50:30.481555   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.481567   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:30.481581   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:30.481643   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:30.527384   58817 cri.go:89] found id: ""
	I0719 15:50:30.527416   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.527426   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:30.527434   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:30.527495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:30.591502   58817 cri.go:89] found id: ""
	I0719 15:50:30.591530   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.591540   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:30.591548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:30.591603   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:30.627271   58817 cri.go:89] found id: ""
	I0719 15:50:30.627298   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.627306   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:30.627315   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:30.627326   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:30.680411   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:30.680463   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:30.694309   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:30.694344   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:30.771740   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.771776   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:30.771794   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:30.857591   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:30.857625   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:29.815683   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.312009   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.312309   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.013332   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.013381   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.082937   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.407376   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:33.421602   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:33.421680   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:33.458608   58817 cri.go:89] found id: ""
	I0719 15:50:33.458640   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.458650   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:33.458658   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:33.458720   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:33.494250   58817 cri.go:89] found id: ""
	I0719 15:50:33.494279   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.494290   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:33.494298   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:33.494363   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:33.534768   58817 cri.go:89] found id: ""
	I0719 15:50:33.534793   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.534804   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:33.534811   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:33.534876   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:33.569912   58817 cri.go:89] found id: ""
	I0719 15:50:33.569942   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.569950   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:33.569955   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:33.570010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:33.605462   58817 cri.go:89] found id: ""
	I0719 15:50:33.605486   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.605496   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:33.605503   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:33.605569   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:33.649091   58817 cri.go:89] found id: ""
	I0719 15:50:33.649121   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.649129   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:33.649134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:33.649184   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:33.682056   58817 cri.go:89] found id: ""
	I0719 15:50:33.682084   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.682092   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:33.682097   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:33.682145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:33.717454   58817 cri.go:89] found id: ""
	I0719 15:50:33.717483   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.717492   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:33.717501   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:33.717513   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:33.770793   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:33.770828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:33.784549   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:33.784583   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:33.860831   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:33.860851   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:33.860862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:33.936003   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:33.936037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.476206   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:36.489032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:36.489090   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:36.525070   58817 cri.go:89] found id: ""
	I0719 15:50:36.525098   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.525108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:36.525116   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:36.525171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:36.560278   58817 cri.go:89] found id: ""
	I0719 15:50:36.560301   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.560309   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:36.560315   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:36.560367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:36.595594   58817 cri.go:89] found id: ""
	I0719 15:50:36.595620   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.595630   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:36.595637   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:36.595696   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:36.631403   58817 cri.go:89] found id: ""
	I0719 15:50:36.631434   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.631442   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:36.631447   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:36.631502   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:36.671387   58817 cri.go:89] found id: ""
	I0719 15:50:36.671413   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.671424   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:36.671431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:36.671492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:36.705473   58817 cri.go:89] found id: ""
	I0719 15:50:36.705500   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.705507   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:36.705514   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:36.705559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:36.741077   58817 cri.go:89] found id: ""
	I0719 15:50:36.741110   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.741126   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:36.741133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:36.741195   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:36.781987   58817 cri.go:89] found id: ""
	I0719 15:50:36.782016   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.782025   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:36.782036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:36.782051   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:36.795107   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:36.795138   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:36.869034   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:36.869056   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:36.869070   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:36.946172   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:36.946207   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.983497   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:36.983535   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:36.812745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.312184   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.513321   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:36.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.012035   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:35.084310   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:37.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.537658   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:39.551682   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:39.551756   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:39.588176   58817 cri.go:89] found id: ""
	I0719 15:50:39.588199   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.588206   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:39.588212   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:39.588255   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:39.623202   58817 cri.go:89] found id: ""
	I0719 15:50:39.623235   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.623245   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:39.623265   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:39.623317   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:39.658601   58817 cri.go:89] found id: ""
	I0719 15:50:39.658634   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.658646   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:39.658653   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:39.658712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:39.694820   58817 cri.go:89] found id: ""
	I0719 15:50:39.694842   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.694852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:39.694859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:39.694922   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:39.734296   58817 cri.go:89] found id: ""
	I0719 15:50:39.734325   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.734333   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:39.734339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:39.734393   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:39.773416   58817 cri.go:89] found id: ""
	I0719 15:50:39.773506   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.773527   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:39.773538   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:39.773614   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:39.812265   58817 cri.go:89] found id: ""
	I0719 15:50:39.812293   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.812303   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:39.812311   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:39.812366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:39.849148   58817 cri.go:89] found id: ""
	I0719 15:50:39.849177   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.849188   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:39.849199   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:39.849213   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.900254   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:39.900285   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:39.913997   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:39.914025   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:39.986937   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:39.986963   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:39.986982   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:40.071967   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:40.072009   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:42.612170   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:42.625741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:42.625824   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:42.662199   58817 cri.go:89] found id: ""
	I0719 15:50:42.662230   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.662253   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:42.662261   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:42.662314   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:42.702346   58817 cri.go:89] found id: ""
	I0719 15:50:42.702374   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.702387   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:42.702394   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:42.702454   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:42.743446   58817 cri.go:89] found id: ""
	I0719 15:50:42.743475   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.743488   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:42.743495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:42.743555   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:42.783820   58817 cri.go:89] found id: ""
	I0719 15:50:42.783844   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.783852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:42.783858   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:42.783917   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:42.821375   58817 cri.go:89] found id: ""
	I0719 15:50:42.821403   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.821414   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:42.821421   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:42.821484   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:42.856010   58817 cri.go:89] found id: ""
	I0719 15:50:42.856037   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.856045   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:42.856051   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:42.856097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:42.895867   58817 cri.go:89] found id: ""
	I0719 15:50:42.895894   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.895902   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:42.895908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:42.895955   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:42.933077   58817 cri.go:89] found id: ""
	I0719 15:50:42.933106   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.933114   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:42.933123   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:42.933135   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:42.984103   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:42.984142   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:42.998043   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:42.998075   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:43.069188   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:43.069210   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:43.069222   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:43.148933   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:43.148991   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:41.313263   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.816257   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:41.014458   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.017012   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:40.083591   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:42.582246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:44.582857   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.687007   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:45.701019   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:45.701099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:45.737934   58817 cri.go:89] found id: ""
	I0719 15:50:45.737960   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.737970   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:45.737978   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:45.738037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:45.774401   58817 cri.go:89] found id: ""
	I0719 15:50:45.774428   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.774438   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:45.774447   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:45.774503   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:45.814507   58817 cri.go:89] found id: ""
	I0719 15:50:45.814533   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.814544   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:45.814551   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:45.814610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:45.855827   58817 cri.go:89] found id: ""
	I0719 15:50:45.855852   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.855870   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:45.855877   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:45.855928   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:45.898168   58817 cri.go:89] found id: ""
	I0719 15:50:45.898196   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.898204   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:45.898209   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:45.898281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:45.933402   58817 cri.go:89] found id: ""
	I0719 15:50:45.933433   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.933449   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:45.933468   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:45.933525   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:45.971415   58817 cri.go:89] found id: ""
	I0719 15:50:45.971443   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.971451   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:45.971457   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:45.971508   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:46.006700   58817 cri.go:89] found id: ""
	I0719 15:50:46.006729   58817 logs.go:276] 0 containers: []
	W0719 15:50:46.006739   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:46.006750   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:46.006764   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:46.083885   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:46.083925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:46.122277   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:46.122308   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:46.172907   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:46.172940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:46.186365   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:46.186392   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:46.263803   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:46.312320   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.312805   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.512849   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.013822   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:46.582906   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.583537   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.764336   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:48.778927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:48.779002   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:48.816538   58817 cri.go:89] found id: ""
	I0719 15:50:48.816566   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.816576   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:48.816589   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:48.816657   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:48.852881   58817 cri.go:89] found id: ""
	I0719 15:50:48.852904   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.852912   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:48.852925   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:48.852987   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:48.886156   58817 cri.go:89] found id: ""
	I0719 15:50:48.886187   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.886196   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:48.886202   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:48.886271   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:48.922221   58817 cri.go:89] found id: ""
	I0719 15:50:48.922270   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.922281   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:48.922289   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:48.922350   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:48.957707   58817 cri.go:89] found id: ""
	I0719 15:50:48.957735   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.957743   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:48.957750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:48.957797   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:48.994635   58817 cri.go:89] found id: ""
	I0719 15:50:48.994667   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.994679   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:48.994687   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:48.994747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:49.028849   58817 cri.go:89] found id: ""
	I0719 15:50:49.028873   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.028881   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:49.028886   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:49.028933   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:49.063835   58817 cri.go:89] found id: ""
	I0719 15:50:49.063865   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.063875   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:49.063885   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:49.063900   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:49.144709   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:49.144751   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:49.184783   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:49.184819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:49.237005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:49.237037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:49.250568   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:49.250595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:49.319473   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:51.820132   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:51.833230   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:51.833298   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:51.870393   58817 cri.go:89] found id: ""
	I0719 15:50:51.870424   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.870435   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:51.870442   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:51.870496   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:51.906094   58817 cri.go:89] found id: ""
	I0719 15:50:51.906119   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.906132   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:51.906139   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:51.906192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:51.941212   58817 cri.go:89] found id: ""
	I0719 15:50:51.941236   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.941244   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:51.941257   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:51.941300   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:51.973902   58817 cri.go:89] found id: ""
	I0719 15:50:51.973925   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.973933   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:51.973938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:51.973983   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:52.010449   58817 cri.go:89] found id: ""
	I0719 15:50:52.010476   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.010486   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:52.010493   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:52.010551   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:52.047317   58817 cri.go:89] found id: ""
	I0719 15:50:52.047343   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.047353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:52.047360   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:52.047405   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:52.081828   58817 cri.go:89] found id: ""
	I0719 15:50:52.081859   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.081868   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:52.081875   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:52.081946   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:52.119128   58817 cri.go:89] found id: ""
	I0719 15:50:52.119156   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.119164   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:52.119172   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:52.119185   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:52.132928   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:52.132955   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:52.203075   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:52.203099   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:52.203114   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:52.278743   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:52.278781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:52.325456   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:52.325492   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:50.815488   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.312626   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:50.013996   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:52.514493   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:51.082358   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.582566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:54.879243   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:54.894078   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:54.894147   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:54.931463   58817 cri.go:89] found id: ""
	I0719 15:50:54.931496   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.931507   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:54.931514   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:54.931585   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:54.968803   58817 cri.go:89] found id: ""
	I0719 15:50:54.968831   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.968840   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:54.968847   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:54.968911   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:55.005621   58817 cri.go:89] found id: ""
	I0719 15:50:55.005646   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.005657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:55.005664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:55.005733   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:55.040271   58817 cri.go:89] found id: ""
	I0719 15:50:55.040292   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.040299   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:55.040305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:55.040349   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:55.072693   58817 cri.go:89] found id: ""
	I0719 15:50:55.072714   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.072722   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:55.072728   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:55.072779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:55.111346   58817 cri.go:89] found id: ""
	I0719 15:50:55.111373   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.111381   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:55.111386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:55.111430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:55.149358   58817 cri.go:89] found id: ""
	I0719 15:50:55.149385   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.149395   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:55.149402   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:55.149459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:55.183807   58817 cri.go:89] found id: ""
	I0719 15:50:55.183834   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.183845   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:55.183856   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:55.183870   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:55.234128   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:55.234157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:55.247947   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:55.247971   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:55.317405   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:55.317425   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:55.317436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:55.398613   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:55.398649   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:57.945601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:57.960139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:57.960193   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:58.000436   58817 cri.go:89] found id: ""
	I0719 15:50:58.000462   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.000469   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:58.000476   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:58.000522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:58.041437   58817 cri.go:89] found id: ""
	I0719 15:50:58.041463   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.041472   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:58.041477   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:58.041539   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:58.077280   58817 cri.go:89] found id: ""
	I0719 15:50:58.077303   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.077311   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:58.077317   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:58.077373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:58.111992   58817 cri.go:89] found id: ""
	I0719 15:50:58.112019   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.112026   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:58.112032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:58.112107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:58.146582   58817 cri.go:89] found id: ""
	I0719 15:50:58.146610   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.146620   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:58.146625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:58.146669   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:58.182159   58817 cri.go:89] found id: ""
	I0719 15:50:58.182187   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.182196   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:58.182204   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:58.182279   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:58.215804   58817 cri.go:89] found id: ""
	I0719 15:50:58.215834   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.215844   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:58.215852   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:58.215913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:58.249366   58817 cri.go:89] found id: ""
	I0719 15:50:58.249392   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.249402   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:58.249413   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:58.249430   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:50:55.814460   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.313739   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:55.014039   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:57.513248   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:56.082876   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.583172   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	W0719 15:50:58.324510   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:58.324536   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:58.324550   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:58.406320   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:58.406353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:58.449820   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:58.449854   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:58.502245   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:58.502281   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:01.018374   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:01.032683   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:01.032753   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:01.071867   58817 cri.go:89] found id: ""
	I0719 15:51:01.071898   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.071910   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:01.071917   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:01.071982   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:01.108227   58817 cri.go:89] found id: ""
	I0719 15:51:01.108251   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.108259   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:01.108264   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:01.108309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:01.143029   58817 cri.go:89] found id: ""
	I0719 15:51:01.143064   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.143076   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:01.143083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:01.143154   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:01.178871   58817 cri.go:89] found id: ""
	I0719 15:51:01.178901   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.178911   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:01.178919   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:01.178974   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:01.216476   58817 cri.go:89] found id: ""
	I0719 15:51:01.216507   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.216518   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:01.216526   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:01.216584   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:01.254534   58817 cri.go:89] found id: ""
	I0719 15:51:01.254557   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.254565   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:01.254572   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:01.254617   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:01.293156   58817 cri.go:89] found id: ""
	I0719 15:51:01.293187   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.293198   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:01.293212   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:01.293278   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:01.328509   58817 cri.go:89] found id: ""
	I0719 15:51:01.328538   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.328549   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:01.328560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:01.328574   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:01.399659   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:01.399678   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:01.399693   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:01.476954   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:01.476993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:01.519513   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:01.519539   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:01.571976   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:01.572015   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:00.812445   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.813629   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.011751   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.013062   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.013473   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.584028   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:03.082149   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.088726   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:04.102579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:04.102642   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:04.141850   58817 cri.go:89] found id: ""
	I0719 15:51:04.141888   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.141899   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:04.141907   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:04.141988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:04.177821   58817 cri.go:89] found id: ""
	I0719 15:51:04.177846   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.177854   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:04.177859   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:04.177914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:04.212905   58817 cri.go:89] found id: ""
	I0719 15:51:04.212935   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.212945   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:04.212951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:04.213012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:04.249724   58817 cri.go:89] found id: ""
	I0719 15:51:04.249762   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.249773   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:04.249781   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:04.249843   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:04.285373   58817 cri.go:89] found id: ""
	I0719 15:51:04.285407   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.285418   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:04.285430   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:04.285490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:04.348842   58817 cri.go:89] found id: ""
	I0719 15:51:04.348878   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.348888   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:04.348895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:04.348963   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:04.384420   58817 cri.go:89] found id: ""
	I0719 15:51:04.384448   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.384459   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:04.384466   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:04.384533   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:04.420716   58817 cri.go:89] found id: ""
	I0719 15:51:04.420746   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.420754   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:04.420763   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:04.420775   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:04.472986   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:04.473027   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.488911   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:04.488938   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:04.563103   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:04.563125   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:04.563139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:04.640110   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:04.640151   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.183190   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:07.196605   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:07.196667   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:07.234974   58817 cri.go:89] found id: ""
	I0719 15:51:07.235002   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.235010   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:07.235016   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:07.235066   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:07.269045   58817 cri.go:89] found id: ""
	I0719 15:51:07.269078   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.269089   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:07.269096   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:07.269156   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:07.308866   58817 cri.go:89] found id: ""
	I0719 15:51:07.308897   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.308907   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:07.308914   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:07.308973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:07.344406   58817 cri.go:89] found id: ""
	I0719 15:51:07.344440   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.344451   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:07.344459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:07.344517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:07.379914   58817 cri.go:89] found id: ""
	I0719 15:51:07.379948   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.379956   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:07.379962   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:07.380010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:07.420884   58817 cri.go:89] found id: ""
	I0719 15:51:07.420923   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.420934   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:07.420942   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:07.421012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:07.455012   58817 cri.go:89] found id: ""
	I0719 15:51:07.455041   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.455071   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:07.455082   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:07.455151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:07.492321   58817 cri.go:89] found id: ""
	I0719 15:51:07.492346   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.492354   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:07.492362   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:07.492374   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:07.506377   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:07.506408   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:07.578895   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:07.578928   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:07.578943   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:07.662333   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:07.662373   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.701823   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:07.701856   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:05.312865   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.816945   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:06.513634   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.012283   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:05.084185   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.583429   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.583944   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:10.256610   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:10.270156   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:10.270225   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:10.311318   58817 cri.go:89] found id: ""
	I0719 15:51:10.311347   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.311357   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:10.311365   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:10.311422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:10.347145   58817 cri.go:89] found id: ""
	I0719 15:51:10.347174   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.347183   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:10.347189   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:10.347243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:10.381626   58817 cri.go:89] found id: ""
	I0719 15:51:10.381659   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.381672   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:10.381680   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:10.381750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:10.417077   58817 cri.go:89] found id: ""
	I0719 15:51:10.417103   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.417111   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:10.417117   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:10.417174   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:10.454094   58817 cri.go:89] found id: ""
	I0719 15:51:10.454123   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.454131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:10.454137   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:10.454185   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:10.489713   58817 cri.go:89] found id: ""
	I0719 15:51:10.489739   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.489747   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:10.489753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:10.489799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:10.524700   58817 cri.go:89] found id: ""
	I0719 15:51:10.524737   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.524745   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:10.524753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:10.524810   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:10.564249   58817 cri.go:89] found id: ""
	I0719 15:51:10.564277   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.564285   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:10.564293   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:10.564309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.618563   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:10.618599   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:10.633032   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:10.633058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:10.706504   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:10.706530   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:10.706546   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:10.800542   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:10.800581   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:10.315941   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:12.812732   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.013749   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.513338   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.584335   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:14.083745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.357761   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:13.371415   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:13.371492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:13.406666   58817 cri.go:89] found id: ""
	I0719 15:51:13.406695   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.406705   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:13.406713   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:13.406773   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:13.448125   58817 cri.go:89] found id: ""
	I0719 15:51:13.448153   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.448164   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:13.448171   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:13.448233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:13.483281   58817 cri.go:89] found id: ""
	I0719 15:51:13.483306   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.483315   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:13.483323   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:13.483384   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:13.522499   58817 cri.go:89] found id: ""
	I0719 15:51:13.522527   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.522538   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:13.522545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:13.522605   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:13.560011   58817 cri.go:89] found id: ""
	I0719 15:51:13.560038   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.560049   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:13.560056   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:13.560115   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:13.596777   58817 cri.go:89] found id: ""
	I0719 15:51:13.596812   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.596824   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:13.596832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:13.596883   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:13.633765   58817 cri.go:89] found id: ""
	I0719 15:51:13.633790   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.633798   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:13.633804   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:13.633857   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:13.670129   58817 cri.go:89] found id: ""
	I0719 15:51:13.670151   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.670160   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:13.670168   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:13.670179   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:13.745337   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:13.745363   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:13.745375   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:13.827800   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:13.827831   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.871659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:13.871695   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:13.925445   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:13.925478   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.439455   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:16.454414   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:16.454485   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:16.494962   58817 cri.go:89] found id: ""
	I0719 15:51:16.494987   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.494997   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:16.495004   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:16.495048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:16.540948   58817 cri.go:89] found id: ""
	I0719 15:51:16.540978   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.540986   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:16.540992   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:16.541052   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:16.588886   58817 cri.go:89] found id: ""
	I0719 15:51:16.588916   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.588926   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:16.588933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:16.588990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:16.649174   58817 cri.go:89] found id: ""
	I0719 15:51:16.649198   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.649207   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:16.649214   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:16.649260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:16.688759   58817 cri.go:89] found id: ""
	I0719 15:51:16.688787   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.688794   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:16.688800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:16.688860   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:16.724730   58817 cri.go:89] found id: ""
	I0719 15:51:16.724759   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.724767   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:16.724773   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:16.724831   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:16.762972   58817 cri.go:89] found id: ""
	I0719 15:51:16.762995   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.763002   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:16.763007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:16.763058   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:16.798054   58817 cri.go:89] found id: ""
	I0719 15:51:16.798080   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.798088   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:16.798096   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:16.798107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:16.887495   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:16.887533   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:16.929384   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:16.929412   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:16.978331   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:16.978362   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.991663   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:16.991687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:17.064706   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:15.311404   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:17.312317   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.013193   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:18.014317   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.583403   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.082807   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.565881   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:19.579476   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:19.579536   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:19.614551   58817 cri.go:89] found id: ""
	I0719 15:51:19.614576   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.614586   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:19.614595   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:19.614655   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:19.657984   58817 cri.go:89] found id: ""
	I0719 15:51:19.658012   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.658023   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:19.658030   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:19.658098   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:19.692759   58817 cri.go:89] found id: ""
	I0719 15:51:19.692785   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.692793   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:19.692800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:19.692855   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:19.726119   58817 cri.go:89] found id: ""
	I0719 15:51:19.726148   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.726158   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:19.726174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:19.726230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:19.763348   58817 cri.go:89] found id: ""
	I0719 15:51:19.763372   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.763379   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:19.763385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:19.763439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:19.796880   58817 cri.go:89] found id: ""
	I0719 15:51:19.796909   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.796923   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:19.796929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:19.796977   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:19.831819   58817 cri.go:89] found id: ""
	I0719 15:51:19.831845   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.831853   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:19.831859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:19.831913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:19.866787   58817 cri.go:89] found id: ""
	I0719 15:51:19.866814   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.866825   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:19.866835   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:19.866848   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:19.914087   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:19.914120   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.927236   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:19.927260   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:19.995619   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:19.995643   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:19.995658   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:20.084355   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:20.084385   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:22.623263   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:22.637745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:22.637818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:22.678276   58817 cri.go:89] found id: ""
	I0719 15:51:22.678305   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.678317   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:22.678325   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:22.678378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:22.716710   58817 cri.go:89] found id: ""
	I0719 15:51:22.716736   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.716753   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:22.716761   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:22.716828   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:22.754965   58817 cri.go:89] found id: ""
	I0719 15:51:22.754993   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.755002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:22.755008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:22.755054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:22.788474   58817 cri.go:89] found id: ""
	I0719 15:51:22.788508   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.788519   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:22.788527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:22.788586   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:22.823838   58817 cri.go:89] found id: ""
	I0719 15:51:22.823872   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.823882   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:22.823889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:22.823950   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:22.863086   58817 cri.go:89] found id: ""
	I0719 15:51:22.863127   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.863138   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:22.863146   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:22.863211   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:22.899292   58817 cri.go:89] found id: ""
	I0719 15:51:22.899321   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.899331   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:22.899339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:22.899403   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:22.932292   58817 cri.go:89] found id: ""
	I0719 15:51:22.932318   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.932328   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:22.932338   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:22.932353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:23.003438   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:23.003460   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:23.003477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:23.088349   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:23.088391   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:23.132169   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:23.132194   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:23.184036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:23.184069   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.812659   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.813178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.311781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:20.512610   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:22.512707   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.083030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:23.583501   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.698493   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:25.712199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:25.712267   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:25.750330   58817 cri.go:89] found id: ""
	I0719 15:51:25.750358   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.750368   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:25.750375   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:25.750434   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:25.784747   58817 cri.go:89] found id: ""
	I0719 15:51:25.784777   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.784788   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:25.784794   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:25.784853   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:25.821272   58817 cri.go:89] found id: ""
	I0719 15:51:25.821297   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.821308   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:25.821315   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:25.821370   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:25.858697   58817 cri.go:89] found id: ""
	I0719 15:51:25.858723   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.858732   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:25.858737   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:25.858782   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:25.901706   58817 cri.go:89] found id: ""
	I0719 15:51:25.901738   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.901749   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:25.901757   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:25.901818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:25.943073   58817 cri.go:89] found id: ""
	I0719 15:51:25.943103   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.943115   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:25.943122   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:25.943190   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:25.982707   58817 cri.go:89] found id: ""
	I0719 15:51:25.982731   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.982739   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:25.982745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:25.982791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:26.023419   58817 cri.go:89] found id: ""
	I0719 15:51:26.023442   58817 logs.go:276] 0 containers: []
	W0719 15:51:26.023449   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:26.023456   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:26.023468   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:26.103842   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:26.103875   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:26.143567   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:26.143594   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:26.199821   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:26.199862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:26.214829   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:26.214865   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:26.287368   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:26.312416   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.313406   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.513171   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:27.012377   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:29.014890   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.583785   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.083633   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.788202   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:28.801609   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:28.801676   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:28.834911   58817 cri.go:89] found id: ""
	I0719 15:51:28.834937   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.834947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:28.834955   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:28.835013   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:28.868219   58817 cri.go:89] found id: ""
	I0719 15:51:28.868242   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.868250   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:28.868256   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:28.868315   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:28.904034   58817 cri.go:89] found id: ""
	I0719 15:51:28.904055   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.904063   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:28.904068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:28.904121   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:28.941019   58817 cri.go:89] found id: ""
	I0719 15:51:28.941051   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.941061   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:28.941068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:28.941129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:28.976309   58817 cri.go:89] found id: ""
	I0719 15:51:28.976335   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.976346   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:28.976352   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:28.976410   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:29.011340   58817 cri.go:89] found id: ""
	I0719 15:51:29.011368   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.011378   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:29.011388   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:29.011447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:29.044356   58817 cri.go:89] found id: ""
	I0719 15:51:29.044378   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.044385   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:29.044390   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:29.044438   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:29.080883   58817 cri.go:89] found id: ""
	I0719 15:51:29.080910   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.080919   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:29.080929   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:29.080941   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:29.160266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:29.160303   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:29.198221   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:29.198267   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:29.249058   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:29.249088   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:29.262711   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:29.262740   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:29.335654   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:31.836354   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:31.851895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:31.851957   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:31.887001   58817 cri.go:89] found id: ""
	I0719 15:51:31.887036   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.887052   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:31.887058   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:31.887107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:31.922102   58817 cri.go:89] found id: ""
	I0719 15:51:31.922132   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.922140   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:31.922145   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:31.922196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:31.960183   58817 cri.go:89] found id: ""
	I0719 15:51:31.960208   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.960215   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:31.960221   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:31.960263   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:31.994822   58817 cri.go:89] found id: ""
	I0719 15:51:31.994849   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.994859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:31.994865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:31.994912   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:32.034110   58817 cri.go:89] found id: ""
	I0719 15:51:32.034136   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.034145   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:32.034151   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:32.034209   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:32.071808   58817 cri.go:89] found id: ""
	I0719 15:51:32.071834   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.071842   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:32.071847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:32.071910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:32.110784   58817 cri.go:89] found id: ""
	I0719 15:51:32.110810   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.110820   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:32.110828   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:32.110895   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:32.148052   58817 cri.go:89] found id: ""
	I0719 15:51:32.148086   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.148097   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:32.148108   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:32.148124   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:32.198891   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:32.198926   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:32.212225   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:32.212251   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:32.288389   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:32.288412   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:32.288431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:32.368196   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:32.368229   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:30.811822   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.813013   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:31.512155   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.012636   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:30.083916   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.582845   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.582945   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.911872   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:34.926689   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:34.926771   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:34.959953   58817 cri.go:89] found id: ""
	I0719 15:51:34.959982   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.959992   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:34.960000   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:34.960061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:34.999177   58817 cri.go:89] found id: ""
	I0719 15:51:34.999206   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.999216   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:34.999223   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:34.999283   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:35.036001   58817 cri.go:89] found id: ""
	I0719 15:51:35.036034   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.036045   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:35.036052   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:35.036099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:35.070375   58817 cri.go:89] found id: ""
	I0719 15:51:35.070404   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.070415   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:35.070423   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:35.070483   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:35.106940   58817 cri.go:89] found id: ""
	I0719 15:51:35.106969   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.106979   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:35.106984   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:35.107031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:35.151664   58817 cri.go:89] found id: ""
	I0719 15:51:35.151688   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.151695   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:35.151700   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:35.151748   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:35.187536   58817 cri.go:89] found id: ""
	I0719 15:51:35.187564   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.187578   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:35.187588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:35.187662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.222614   58817 cri.go:89] found id: ""
	I0719 15:51:35.222642   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.222652   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:35.222662   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:35.222677   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:35.273782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:35.273816   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:35.288147   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:35.288176   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:35.361085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:35.361107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:35.361118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:35.443327   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:35.443358   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:37.994508   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:38.007709   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:38.007779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:38.040910   58817 cri.go:89] found id: ""
	I0719 15:51:38.040940   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.040947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:38.040954   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:38.040999   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:38.080009   58817 cri.go:89] found id: ""
	I0719 15:51:38.080039   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.080058   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:38.080066   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:38.080137   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:38.115997   58817 cri.go:89] found id: ""
	I0719 15:51:38.116018   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.116026   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:38.116031   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:38.116079   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:38.150951   58817 cri.go:89] found id: ""
	I0719 15:51:38.150973   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.150981   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:38.150987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:38.151045   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:38.184903   58817 cri.go:89] found id: ""
	I0719 15:51:38.184938   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.184949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:38.184956   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:38.185014   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:38.218099   58817 cri.go:89] found id: ""
	I0719 15:51:38.218123   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.218131   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:38.218138   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:38.218192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:38.252965   58817 cri.go:89] found id: ""
	I0719 15:51:38.252990   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.252997   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:38.253003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:38.253047   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.313638   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:37.813400   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.013415   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.513387   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.583140   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:39.084770   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.289710   58817 cri.go:89] found id: ""
	I0719 15:51:38.289739   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.289749   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:38.289757   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:38.289770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:38.340686   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:38.340715   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:38.354334   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:38.354357   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:38.424410   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:38.424438   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:38.424452   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:38.500744   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:38.500781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.043436   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:41.056857   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:41.056914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:41.093651   58817 cri.go:89] found id: ""
	I0719 15:51:41.093678   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.093688   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:41.093695   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:41.093749   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:41.129544   58817 cri.go:89] found id: ""
	I0719 15:51:41.129572   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.129580   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:41.129586   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:41.129646   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:41.163416   58817 cri.go:89] found id: ""
	I0719 15:51:41.163444   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.163457   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:41.163465   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:41.163520   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:41.199180   58817 cri.go:89] found id: ""
	I0719 15:51:41.199205   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.199212   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:41.199220   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:41.199274   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:41.233891   58817 cri.go:89] found id: ""
	I0719 15:51:41.233919   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.233929   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:41.233936   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:41.233990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:41.270749   58817 cri.go:89] found id: ""
	I0719 15:51:41.270777   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.270788   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:41.270794   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:41.270841   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:41.308365   58817 cri.go:89] found id: ""
	I0719 15:51:41.308393   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.308402   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:41.308408   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:41.308462   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:41.344692   58817 cri.go:89] found id: ""
	I0719 15:51:41.344720   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.344729   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:41.344738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:41.344749   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:41.420009   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:41.420035   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:41.420052   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:41.503356   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:41.503397   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.543875   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:41.543905   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:41.595322   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:41.595353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
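The block above is one complete diagnostics pass from logs.go for the v1.20.0 cluster (PID 58817): with the apiserver unreachable, minikube asks CRI-O for each control-plane component in turn and, finding no containers, falls back to the kubelet and CRI-O journals, dmesg, and a node describe. The same pass can be reproduced by hand inside the VM; the component list, flags, and paths below are copied from the log lines above (a sketch, not minikube's own code):

    # run inside the minikube VM (e.g. via `minikube ssh`)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"    # empty output == "No container was found matching ..."
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig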
	I0719 15:51:40.312909   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:42.812703   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.011956   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:43.513117   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.584336   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.082447   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.110343   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:44.125297   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:44.125365   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:44.160356   58817 cri.go:89] found id: ""
	I0719 15:51:44.160387   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.160398   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:44.160405   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:44.160461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:44.195025   58817 cri.go:89] found id: ""
	I0719 15:51:44.195055   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.195065   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:44.195073   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:44.195140   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:44.227871   58817 cri.go:89] found id: ""
	I0719 15:51:44.227907   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.227929   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:44.227937   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:44.228000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:44.265270   58817 cri.go:89] found id: ""
	I0719 15:51:44.265296   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.265305   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:44.265312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:44.265368   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:44.298714   58817 cri.go:89] found id: ""
	I0719 15:51:44.298744   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.298755   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:44.298762   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:44.298826   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:44.332638   58817 cri.go:89] found id: ""
	I0719 15:51:44.332665   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.332673   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:44.332679   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:44.332738   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:44.366871   58817 cri.go:89] found id: ""
	I0719 15:51:44.366897   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.366906   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:44.366913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:44.366980   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:44.409353   58817 cri.go:89] found id: ""
	I0719 15:51:44.409381   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.409392   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:44.409402   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:44.409417   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.446148   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:44.446178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:44.497188   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:44.497217   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.511904   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:44.511935   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:44.577175   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:44.577193   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:44.577208   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.161809   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:47.175425   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:47.175490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:47.213648   58817 cri.go:89] found id: ""
	I0719 15:51:47.213674   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.213681   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:47.213687   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:47.213737   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:47.249941   58817 cri.go:89] found id: ""
	I0719 15:51:47.249967   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.249979   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:47.249986   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:47.250041   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:47.284232   58817 cri.go:89] found id: ""
	I0719 15:51:47.284254   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.284261   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:47.284267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:47.284318   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:47.321733   58817 cri.go:89] found id: ""
	I0719 15:51:47.321767   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.321778   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:47.321786   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:47.321844   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:47.358479   58817 cri.go:89] found id: ""
	I0719 15:51:47.358508   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.358520   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:47.358527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:47.358582   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:47.390070   58817 cri.go:89] found id: ""
	I0719 15:51:47.390098   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.390108   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:47.390116   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:47.390176   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:47.429084   58817 cri.go:89] found id: ""
	I0719 15:51:47.429111   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.429118   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:47.429124   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:47.429179   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:47.469938   58817 cri.go:89] found id: ""
	I0719 15:51:47.469969   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.469979   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:47.469991   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:47.470005   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:47.524080   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:47.524110   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:47.538963   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:47.538993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:47.609107   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:47.609128   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:47.609143   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.691984   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:47.692028   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.813328   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:47.318119   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.013597   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.513037   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.083435   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.582222   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.234104   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:50.248706   58817 kubeadm.go:597] duration metric: took 4m2.874850727s to restartPrimaryControlPlane
	W0719 15:51:50.248802   58817 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:50.248827   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:50.712030   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:51:50.727328   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:51:50.737545   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:51:50.748830   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:51:50.748855   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:51:50.748900   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:51:50.758501   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:51:50.758548   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:51:50.767877   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:51:50.777413   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:51:50.777477   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:51:50.787005   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.795917   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:51:50.795971   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.805058   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:51:50.814014   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:51:50.814069   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
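The ls/grep/rm sequence above is minikube's stale-kubeconfig cleanup: after the forced kubeadm reset none of the four kubeconfig files exist (the ls check fails for all of them), so each grep for the control-plane endpoint exits non-zero and the already-absent file is removed before kubeadm init is re-run. Condensed into one loop, with the endpoint and paths taken from the log, the cleanup amounts to:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done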
	I0719 15:51:50.823876   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:51:50.893204   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:51:50.893281   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:51:51.028479   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:51:51.028607   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:51:51.028698   58817 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:51:51.212205   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:51:51.214199   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:51:51.214313   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:51:51.214423   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:51:51.214546   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:51:51.214625   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:51:51.214728   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:51:51.214813   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:51:51.214918   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:51:51.215011   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:51:51.215121   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:51:51.215231   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:51:51.215296   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:51:51.215381   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:51:51.275010   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:51:51.481366   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:51:51.685208   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:51:51.799007   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:51:51.820431   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:51:51.822171   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:51:51.822257   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:51:51.984066   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:51:51.986034   58817 out.go:204]   - Booting up control plane ...
	I0719 15:51:51.986137   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:51:51.988167   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:51:51.989122   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:51:51.989976   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:51:52.000879   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:51:49.811847   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:51.812747   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.312028   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.514497   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:53.012564   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.585244   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:52.587963   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.576923   58417 pod_ready.go:81] duration metric: took 4m0.000887015s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	E0719 15:51:54.576954   58417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 15:51:54.576979   58417 pod_ready.go:38] duration metric: took 4m10.045017696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:51:54.577013   58417 kubeadm.go:597] duration metric: took 4m18.572474217s to restartPrimaryControlPlane
	W0719 15:51:54.577075   58417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:54.577107   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
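pod_ready.go allows each system-critical pod 4m0s; metrics-server-78fcd8795b-zwr8g in kube-system never reported Ready, so the wait times out above and the run falls back to a full cluster reset (the kubeadm reset just issued). Judging by the "- Using image fake.domain/registry.k8s.io/echoserver:1.4" line further down, the addon in this test is deliberately pointed at an unpullable image, which would keep the pod from ever becoming Ready. To inspect such a pod by hand (the k8s-app=metrics-server label is an assumption based on the standard metrics-server manifest, not something shown in this log):

    kubectl --context no-preload-382231 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context no-preload-382231 -n kube-system describe pod metrics-server-78fcd8795b-zwr8g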
	I0719 15:51:56.314112   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:58.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:55.012915   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:57.512491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:01.312620   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:03.812880   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:59.512666   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:02.013784   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.314545   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:08.811891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:04.512583   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:09.016808   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:10.813197   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:13.313167   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:11.513329   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:14.012352   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:15.812105   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:17.812843   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:16.014362   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:18.513873   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:20.685347   58417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.108209289s)
	I0719 15:52:20.685431   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:20.699962   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:20.709728   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:20.719022   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:20.719038   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:52:20.719074   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:52:20.727669   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:52:20.727731   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:52:20.736851   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:52:20.745821   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:52:20.745867   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:52:20.755440   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.764307   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:52:20.764360   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.773759   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:52:20.782354   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:52:20.782420   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:52:20.791186   58417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:20.837700   58417 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 15:52:20.837797   58417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:52:20.958336   58417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:20.958486   58417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:20.958629   58417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:20.967904   58417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:20.969995   58417 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:20.970097   58417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:52:20.970197   58417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:20.970325   58417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:52:20.970438   58417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:52:20.970550   58417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:52:20.970633   58417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:52:20.970740   58417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:52:20.970840   58417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:52:20.970949   58417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:52:20.971049   58417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:52:20.971106   58417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:52:20.971184   58417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:21.175226   58417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:21.355994   58417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 15:52:21.453237   58417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:21.569014   58417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:21.672565   58417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:21.673036   58417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:21.675860   58417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:20.312428   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:22.312770   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:24.314183   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.013099   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:23.512341   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.677594   58417 out.go:204]   - Booting up control plane ...
	I0719 15:52:21.677694   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:21.677787   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:21.677894   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:21.695474   58417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:21.701352   58417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:21.701419   58417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:52:21.831941   58417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 15:52:21.832046   58417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 15:52:22.333073   58417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.399393ms
	I0719 15:52:22.333184   58417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 15:52:27.336964   58417 kubeadm.go:310] [api-check] The API server is healthy after 5.002306078s
	I0719 15:52:27.348152   58417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:27.366916   58417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:27.396214   58417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:27.396475   58417 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-382231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:27.408607   58417 kubeadm.go:310] [bootstrap-token] Using token: xdoy2n.29347ekmgral9ki3
	I0719 15:52:27.409857   58417 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:27.409991   58417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:27.415553   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:27.424772   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:27.428421   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:27.439922   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:27.443985   58417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:27.742805   58417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:28.253742   58417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 15:52:28.744380   58417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 15:52:28.744405   58417 kubeadm.go:310] 
	I0719 15:52:28.744486   58417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:28.744498   58417 kubeadm.go:310] 
	I0719 15:52:28.744581   58417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:28.744588   58417 kubeadm.go:310] 
	I0719 15:52:28.744633   58417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 15:52:28.744704   58417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:28.744783   58417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:28.744794   58417 kubeadm.go:310] 
	I0719 15:52:28.744877   58417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 15:52:28.744891   58417 kubeadm.go:310] 
	I0719 15:52:28.744944   58417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:28.744951   58417 kubeadm.go:310] 
	I0719 15:52:28.744992   58417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 15:52:28.745082   58417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:28.745172   58417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:28.745181   58417 kubeadm.go:310] 
	I0719 15:52:28.745253   58417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:28.745319   58417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 15:52:28.745332   58417 kubeadm.go:310] 
	I0719 15:52:28.745412   58417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745499   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 15:52:28.745518   58417 kubeadm.go:310] 	--control-plane 
	I0719 15:52:28.745525   58417 kubeadm.go:310] 
	I0719 15:52:28.745599   58417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:28.745609   58417 kubeadm.go:310] 
	I0719 15:52:28.745677   58417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745778   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 15:52:28.747435   58417 kubeadm.go:310] W0719 15:52:20.814208    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747697   58417 kubeadm.go:310] W0719 15:52:20.814905    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747795   58417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
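kubeadm v1.31.0-beta.0 initializes successfully but leaves two warnings: the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API, and the kubelet service is not enabled. The warnings name their own remedies; applied to the config file minikube generated (its path appears in the init command above; the output filename below is arbitrary), they would look roughly like:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm.v1beta4.yaml
    sudo systemctl enable kubelet.service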
	I0719 15:52:28.747815   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:52:28.747827   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:52:28.749619   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:26.813409   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.814040   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:25.513048   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:27.514730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.750992   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:28.762976   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:52:28.783894   58417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:28.783972   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.783989   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-382231 minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=no-preload-382231 minikube.k8s.io/primary=true
	I0719 15:52:28.808368   58417 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:29.005658   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.505702   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.005765   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.505834   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.005837   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.506329   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.006419   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.505701   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.005735   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.130121   58417 kubeadm.go:1113] duration metric: took 4.346215264s to wait for elevateKubeSystemPrivileges
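The burst of kubectl get sa default calls above is minikube polling roughly every 500 ms (per the timestamps) until the default service account exists in the fresh cluster, after binding kube-system's default service account to cluster-admin via the minikube-rbac clusterrolebinding. An equivalent manual wait, using the same binary path and kubeconfig as the log (a sketch):

    until sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done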
	I0719 15:52:33.130162   58417 kubeadm.go:394] duration metric: took 4m57.173876302s to StartCluster
	I0719 15:52:33.130187   58417 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.130290   58417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:52:33.131944   58417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.132178   58417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:52:33.132237   58417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:52:33.132339   58417 addons.go:69] Setting storage-provisioner=true in profile "no-preload-382231"
	I0719 15:52:33.132358   58417 addons.go:69] Setting default-storageclass=true in profile "no-preload-382231"
	I0719 15:52:33.132381   58417 addons.go:234] Setting addon storage-provisioner=true in "no-preload-382231"
	I0719 15:52:33.132385   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0719 15:52:33.132391   58417 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:52:33.132392   58417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-382231"
	I0719 15:52:33.132419   58417 addons.go:69] Setting metrics-server=true in profile "no-preload-382231"
	I0719 15:52:33.132423   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132444   58417 addons.go:234] Setting addon metrics-server=true in "no-preload-382231"
	W0719 15:52:33.132452   58417 addons.go:243] addon metrics-server should already be in state true
	I0719 15:52:33.132474   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132740   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132763   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132799   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132810   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132822   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132829   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.134856   58417 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:33.136220   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:33.149028   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0719 15:52:33.149128   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0719 15:52:33.149538   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.149646   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.150093   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150108   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150111   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150119   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150477   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150603   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150955   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.150971   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.151326   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I0719 15:52:33.151359   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.151715   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.152199   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.152223   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.152574   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.153136   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.153170   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.155187   58417 addons.go:234] Setting addon default-storageclass=true in "no-preload-382231"
	W0719 15:52:33.155207   58417 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:52:33.155235   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.155572   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.155602   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.170886   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0719 15:52:33.170884   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0719 15:52:33.171439   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.171510   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0719 15:52:33.171543   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172005   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172026   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172109   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172141   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172162   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172538   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172552   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172609   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172775   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.172831   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172875   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.173021   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.173381   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.173405   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.175118   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.175500   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.177023   58417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:52:33.177041   58417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:52:32.000607   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:52:32.000846   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:32.001125   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
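Meanwhile the v1.20.0 re-init (PID 58817, apparently the old-k8s-version profile given the v1.20.0 binaries) is stuck: after the 40s kubelet-check timeout, kubeadm reports that the kubelet health endpoint on 127.0.0.1:10248 refuses connections. Typical next checks on that node would be (suggestions, not taken from this log, apart from the health probe that kubeadm itself quotes):

    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager -n 200
    curl -sSL http://localhost:10248/healthz    # the exact probe kubeadm uses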
	I0719 15:52:33.178348   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:33.178362   58417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:33.178377   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.178450   58417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.178469   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:52:33.178486   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.182287   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182598   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.182617   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182741   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.182948   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.183074   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.183204   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.183372   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183940   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.183959   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183994   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.184237   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.184356   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.184505   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.191628   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0719 15:52:33.191984   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.192366   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.192385   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.192707   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.192866   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.194285   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.194485   58417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.194499   58417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:33.194514   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.197526   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.197853   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.197872   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.198087   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.198335   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.198472   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.198604   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.382687   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:52:33.403225   58417 node_ready.go:35] waiting up to 6m0s for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430507   58417 node_ready.go:49] node "no-preload-382231" has status "Ready":"True"
	I0719 15:52:33.430535   58417 node_ready.go:38] duration metric: took 27.282654ms for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430546   58417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:33.482352   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.555210   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.565855   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:33.565874   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:52:33.571653   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.609541   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:33.609569   58417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:33.674428   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:33.674455   58417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:33.746703   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:34.092029   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092051   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092341   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092359   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.092369   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092379   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092604   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092628   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.092634   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.093766   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.093785   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094025   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094043   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094076   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.094088   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094325   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094343   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094349   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128393   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.128412   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.128715   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128766   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.128775   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.319737   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.319764   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320141   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320161   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320165   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.320184   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.320199   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320441   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320462   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320475   58417 addons.go:475] Verifying addon metrics-server=true in "no-preload-382231"
	I0719 15:52:34.320482   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.322137   58417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:52:30.812091   59208 pod_ready.go:81] duration metric: took 4m0.006187238s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:30.812113   59208 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:30.812120   59208 pod_ready.go:38] duration metric: took 4m8.614544303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.812135   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:30.812161   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:30.812208   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:30.861054   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:30.861074   59208 cri.go:89] found id: ""
	I0719 15:52:30.861083   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:30.861144   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.865653   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:30.865708   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:30.900435   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:30.900459   59208 cri.go:89] found id: ""
	I0719 15:52:30.900468   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:30.900512   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.904686   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:30.904747   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:30.950618   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.950638   59208 cri.go:89] found id: ""
	I0719 15:52:30.950646   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:30.950691   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.955080   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:30.955147   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:30.996665   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:30.996691   59208 cri.go:89] found id: ""
	I0719 15:52:30.996704   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:30.996778   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.001122   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:31.001191   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:31.042946   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.042969   59208 cri.go:89] found id: ""
	I0719 15:52:31.042979   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:31.043039   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.047311   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:31.047365   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:31.086140   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.086166   59208 cri.go:89] found id: ""
	I0719 15:52:31.086175   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:31.086230   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.091742   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:31.091818   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:31.134209   59208 cri.go:89] found id: ""
	I0719 15:52:31.134241   59208 logs.go:276] 0 containers: []
	W0719 15:52:31.134252   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:31.134260   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:31.134316   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:31.173297   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.173325   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.173331   59208 cri.go:89] found id: ""
	I0719 15:52:31.173353   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:31.173414   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.177951   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.182099   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:31.182121   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:31.196541   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:31.196565   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:31.322528   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:31.322555   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:31.369628   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:31.369658   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:31.417834   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:31.417867   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:31.459116   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:31.459145   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.500986   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:31.501018   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.578557   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:31.578606   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:31.635053   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:31.635082   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:31.692604   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:31.692635   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.729765   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:31.729801   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.766152   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:31.766177   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:32.301240   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:32.301278   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.013083   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:32.013142   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:34.323358   58417 addons.go:510] duration metric: took 1.19112329s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:52:37.001693   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:37.001896   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:34.849019   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:34.866751   59208 api_server.go:72] duration metric: took 4m20.402312557s to wait for apiserver process to appear ...
	I0719 15:52:34.866779   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:34.866816   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:34.866876   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:34.905505   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.905532   59208 cri.go:89] found id: ""
	I0719 15:52:34.905542   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:34.905609   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.910996   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:34.911069   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:34.958076   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:34.958100   59208 cri.go:89] found id: ""
	I0719 15:52:34.958110   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:34.958166   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.962439   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:34.962507   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:34.999095   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:34.999117   59208 cri.go:89] found id: ""
	I0719 15:52:34.999126   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:34.999178   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.003785   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:35.003848   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:35.042585   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.042613   59208 cri.go:89] found id: ""
	I0719 15:52:35.042622   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:35.042683   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.048705   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:35.048770   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:35.092408   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.092435   59208 cri.go:89] found id: ""
	I0719 15:52:35.092444   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:35.092499   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.096983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:35.097050   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:35.135694   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.135717   59208 cri.go:89] found id: ""
	I0719 15:52:35.135726   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:35.135782   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.140145   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:35.140223   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:35.178912   59208 cri.go:89] found id: ""
	I0719 15:52:35.178938   59208 logs.go:276] 0 containers: []
	W0719 15:52:35.178948   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:35.178955   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:35.179015   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:35.229067   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.229090   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.229104   59208 cri.go:89] found id: ""
	I0719 15:52:35.229112   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:35.229172   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.234985   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.240098   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:35.240120   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:35.299418   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:35.299449   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:35.316294   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:35.316330   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:35.433573   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:35.433610   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:35.479149   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:35.479181   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:35.526270   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:35.526299   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.564209   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:35.564241   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.601985   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:35.602020   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.669986   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:35.670015   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.711544   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:35.711580   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:35.763800   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:35.763831   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:35.822699   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:35.822732   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.863377   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:35.863422   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:38.777749   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:52:38.781984   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:52:38.782935   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:38.782955   59208 api_server.go:131] duration metric: took 3.916169938s to wait for apiserver health ...
	I0719 15:52:38.782963   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:38.782983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:38.783026   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:38.818364   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:38.818387   59208 cri.go:89] found id: ""
	I0719 15:52:38.818395   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:38.818442   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.823001   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:38.823054   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:38.857871   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:38.857900   59208 cri.go:89] found id: ""
	I0719 15:52:38.857909   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:38.857958   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.864314   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:38.864375   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:38.910404   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:38.910434   59208 cri.go:89] found id: ""
	I0719 15:52:38.910445   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:38.910505   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.915588   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:38.915645   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:38.952981   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:38.953002   59208 cri.go:89] found id: ""
	I0719 15:52:38.953009   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:38.953055   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.957397   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:38.957447   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:39.002973   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.003001   59208 cri.go:89] found id: ""
	I0719 15:52:39.003011   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:39.003059   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.007496   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:39.007568   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:39.045257   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.045282   59208 cri.go:89] found id: ""
	I0719 15:52:39.045291   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:39.045351   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.049358   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:39.049415   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:39.083263   59208 cri.go:89] found id: ""
	I0719 15:52:39.083303   59208 logs.go:276] 0 containers: []
	W0719 15:52:39.083314   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:39.083321   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:39.083391   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:39.121305   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.121348   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.121354   59208 cri.go:89] found id: ""
	I0719 15:52:39.121363   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:39.121421   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.126259   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.130395   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:39.130413   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:39.171213   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:39.171239   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.206545   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:39.206577   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:39.267068   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:39.267105   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:39.373510   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:39.373544   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.512374   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.012559   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:39.013766   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:35.495479   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.989424   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:38.489746   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.489775   58417 pod_ready.go:81] duration metric: took 5.007393051s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.489790   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495855   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.495884   58417 pod_ready.go:81] duration metric: took 6.085398ms for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495895   58417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:40.502651   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:41.503286   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.503309   58417 pod_ready.go:81] duration metric: took 3.007406201s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.503321   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513225   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.513245   58417 pod_ready.go:81] duration metric: took 9.916405ms for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513256   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517651   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.517668   58417 pod_ready.go:81] duration metric: took 4.40518ms for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517677   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522529   58417 pod_ready.go:92] pod "kube-proxy-qd84x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.522544   58417 pod_ready.go:81] duration metric: took 4.861257ms for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522551   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687964   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.687987   58417 pod_ready.go:81] duration metric: took 165.428951ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687997   58417 pod_ready.go:38] duration metric: took 8.257437931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:41.688016   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:41.688069   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:41.705213   58417 api_server.go:72] duration metric: took 8.573000368s to wait for apiserver process to appear ...
	I0719 15:52:41.705236   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:41.705256   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:52:41.709425   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:52:41.710427   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:52:41.710447   58417 api_server.go:131] duration metric: took 5.203308ms to wait for apiserver health ...
	I0719 15:52:41.710455   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:41.890063   58417 system_pods.go:59] 9 kube-system pods found
	I0719 15:52:41.890091   58417 system_pods.go:61] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:41.890095   58417 system_pods.go:61] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:41.890099   58417 system_pods.go:61] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:41.890103   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:41.890106   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:41.890109   58417 system_pods.go:61] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:41.890112   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:41.890117   58417 system_pods.go:61] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:41.890121   58417 system_pods.go:61] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:41.890128   58417 system_pods.go:74] duration metric: took 179.666477ms to wait for pod list to return data ...
	I0719 15:52:41.890135   58417 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.086946   58417 default_sa.go:45] found service account: "default"
	I0719 15:52:42.086973   58417 default_sa.go:55] duration metric: took 196.832888ms for default service account to be created ...
	I0719 15:52:42.086984   58417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.289457   58417 system_pods.go:86] 9 kube-system pods found
	I0719 15:52:42.289483   58417 system_pods.go:89] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:42.289489   58417 system_pods.go:89] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:42.289493   58417 system_pods.go:89] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:42.289498   58417 system_pods.go:89] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:42.289502   58417 system_pods.go:89] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:42.289506   58417 system_pods.go:89] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:42.289510   58417 system_pods.go:89] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:42.289518   58417 system_pods.go:89] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.289523   58417 system_pods.go:89] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:42.289530   58417 system_pods.go:126] duration metric: took 202.54151ms to wait for k8s-apps to be running ...
	I0719 15:52:42.289536   58417 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.289575   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.304866   58417 system_svc.go:56] duration metric: took 15.319153ms WaitForService to wait for kubelet
	I0719 15:52:42.304931   58417 kubeadm.go:582] duration metric: took 9.172718104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.304958   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.488087   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.488108   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.488122   58417 node_conditions.go:105] duration metric: took 183.159221ms to run NodePressure ...
	I0719 15:52:42.488135   58417 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.488144   58417 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.488157   58417 start.go:255] writing updated cluster config ...
	I0719 15:52:42.488453   58417 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.536465   58417 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 15:52:42.538606   58417 out.go:177] * Done! kubectl is now configured to use "no-preload-382231" cluster and "default" namespace by default
	I0719 15:52:39.422000   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:39.422034   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:39.473826   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:39.473860   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:39.515998   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:39.516023   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:39.559475   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:39.559506   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:39.574174   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:39.574205   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.615906   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:39.615933   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.676764   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:39.676795   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.714437   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:39.714467   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:42.584088   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:42.584114   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.584119   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.584123   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.584127   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.584130   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.584133   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.584138   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.584143   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.584150   59208 system_pods.go:74] duration metric: took 3.801182741s to wait for pod list to return data ...
	I0719 15:52:42.584156   59208 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.586910   59208 default_sa.go:45] found service account: "default"
	I0719 15:52:42.586934   59208 default_sa.go:55] duration metric: took 2.771722ms for default service account to be created ...
	I0719 15:52:42.586943   59208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.593611   59208 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:42.593634   59208 system_pods.go:89] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.593639   59208 system_pods.go:89] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.593645   59208 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.593650   59208 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.593654   59208 system_pods.go:89] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.593658   59208 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.593669   59208 system_pods.go:89] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.593673   59208 system_pods.go:89] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.593680   59208 system_pods.go:126] duration metric: took 6.731347ms to wait for k8s-apps to be running ...
	I0719 15:52:42.593687   59208 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.593726   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.615811   59208 system_svc.go:56] duration metric: took 22.114487ms WaitForService to wait for kubelet
	I0719 15:52:42.615841   59208 kubeadm.go:582] duration metric: took 4m28.151407807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.615864   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.619021   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.619040   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.619050   59208 node_conditions.go:105] duration metric: took 3.180958ms to run NodePressure ...
	I0719 15:52:42.619060   59208 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.619067   59208 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.619079   59208 start.go:255] writing updated cluster config ...
	I0719 15:52:42.619329   59208 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.677117   59208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:42.679317   59208 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-601445" cluster and "default" namespace by default
	I0719 15:52:41.514013   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:44.012173   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:47.002231   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:47.002432   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:46.013717   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:48.013121   58376 pod_ready.go:81] duration metric: took 4m0.006772624s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:48.013143   58376 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:48.013150   58376 pod_ready.go:38] duration metric: took 4m4.417474484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:48.013165   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:48.013194   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:48.013234   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:48.067138   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.067166   58376 cri.go:89] found id: ""
	I0719 15:52:48.067175   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:48.067218   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.071486   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:48.071531   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:48.115491   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.115514   58376 cri.go:89] found id: ""
	I0719 15:52:48.115525   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:48.115583   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.119693   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:48.119750   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:48.161158   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.161185   58376 cri.go:89] found id: ""
	I0719 15:52:48.161194   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:48.161257   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.165533   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:48.165584   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:48.207507   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.207528   58376 cri.go:89] found id: ""
	I0719 15:52:48.207537   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:48.207596   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.212070   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:48.212145   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:48.250413   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.250441   58376 cri.go:89] found id: ""
	I0719 15:52:48.250451   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:48.250510   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.255025   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:48.255095   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:48.289898   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.289922   58376 cri.go:89] found id: ""
	I0719 15:52:48.289930   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:48.289976   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.294440   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:48.294489   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:48.329287   58376 cri.go:89] found id: ""
	I0719 15:52:48.329314   58376 logs.go:276] 0 containers: []
	W0719 15:52:48.329326   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:48.329332   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:48.329394   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:48.373215   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.373242   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.373248   58376 cri.go:89] found id: ""
	I0719 15:52:48.373257   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:48.373311   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.377591   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.381610   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:48.381635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:48.440106   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:48.440148   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:48.455200   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:48.455234   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.496729   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:48.496757   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.535475   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:48.535501   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.592954   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:48.592993   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.635925   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:48.635957   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.671611   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:48.671642   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:48.809648   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:48.809681   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.863327   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:48.863361   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.902200   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:48.902245   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.937497   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:48.937525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:49.446900   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:49.446933   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:51.988535   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:52.005140   58376 api_server.go:72] duration metric: took 4m16.116469116s to wait for apiserver process to appear ...
	I0719 15:52:52.005165   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:52.005206   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:52.005258   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:52.041113   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.041143   58376 cri.go:89] found id: ""
	I0719 15:52:52.041150   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:52.041199   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.045292   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:52.045349   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:52.086747   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.086770   58376 cri.go:89] found id: ""
	I0719 15:52:52.086778   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:52.086821   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.091957   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:52.092015   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:52.128096   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.128128   58376 cri.go:89] found id: ""
	I0719 15:52:52.128138   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:52.128204   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.132889   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:52.132949   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:52.168359   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.168389   58376 cri.go:89] found id: ""
	I0719 15:52:52.168398   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:52.168454   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.172577   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:52.172639   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:52.211667   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.211684   58376 cri.go:89] found id: ""
	I0719 15:52:52.211691   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:52.211740   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.215827   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:52.215893   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:52.252105   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.252130   58376 cri.go:89] found id: ""
	I0719 15:52:52.252140   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:52.252194   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.256407   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:52.256464   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:52.292646   58376 cri.go:89] found id: ""
	I0719 15:52:52.292675   58376 logs.go:276] 0 containers: []
	W0719 15:52:52.292685   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:52.292693   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:52.292755   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:52.326845   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.326875   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.326880   58376 cri.go:89] found id: ""
	I0719 15:52:52.326889   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:52.326946   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.331338   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.335530   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:52.335554   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.371981   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:52.372010   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.406921   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:52.406946   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.442975   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:52.443007   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:52.497838   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:52.497873   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:52.556739   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:52.556776   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:52.665610   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:52.665643   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.711547   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:52.711580   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.759589   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:52.759634   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.807300   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:52.807374   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.857159   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:52.857186   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.917896   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:52.917931   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:53.342603   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:53.342646   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:55.857727   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:52:55.861835   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:52:55.862804   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:55.862822   58376 api_server.go:131] duration metric: took 3.857650801s to wait for apiserver health ...
	I0719 15:52:55.862829   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:55.862852   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:55.862905   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:55.900840   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:55.900859   58376 cri.go:89] found id: ""
	I0719 15:52:55.900866   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:55.900909   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.906205   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:55.906291   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:55.950855   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:55.950879   58376 cri.go:89] found id: ""
	I0719 15:52:55.950887   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:55.950939   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.955407   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:55.955472   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:55.994954   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:55.994981   58376 cri.go:89] found id: ""
	I0719 15:52:55.994992   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:55.995052   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.999179   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:55.999241   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:56.036497   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.036521   58376 cri.go:89] found id: ""
	I0719 15:52:56.036530   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:56.036585   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.041834   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:56.041900   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:56.082911   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.082934   58376 cri.go:89] found id: ""
	I0719 15:52:56.082943   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:56.082998   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.087505   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:56.087571   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:56.124517   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.124544   58376 cri.go:89] found id: ""
	I0719 15:52:56.124554   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:56.124616   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.129221   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:56.129297   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:56.170151   58376 cri.go:89] found id: ""
	I0719 15:52:56.170177   58376 logs.go:276] 0 containers: []
	W0719 15:52:56.170193   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:56.170212   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:56.170292   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:56.218351   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:56.218377   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.218381   58376 cri.go:89] found id: ""
	I0719 15:52:56.218388   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:56.218437   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.223426   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.227742   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:56.227759   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.271701   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:56.271733   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:56.325333   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:56.325366   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:56.431391   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:56.431423   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:56.485442   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:56.485472   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:56.527493   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:56.527525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.563260   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:56.563289   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.600604   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:56.600635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.656262   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:56.656305   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:57.031511   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:57.031549   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:57.046723   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:57.046748   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:57.083358   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:57.083390   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:57.124108   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:57.124136   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:59.670804   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:59.670831   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.670836   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.670840   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.670844   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.670847   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.670850   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.670855   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.670859   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.670865   58376 system_pods.go:74] duration metric: took 3.808031391s to wait for pod list to return data ...
	I0719 15:52:59.670871   58376 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:59.673231   58376 default_sa.go:45] found service account: "default"
	I0719 15:52:59.673249   58376 default_sa.go:55] duration metric: took 2.372657ms for default service account to be created ...
	I0719 15:52:59.673255   58376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:59.678267   58376 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:59.678289   58376 system_pods.go:89] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.678296   58376 system_pods.go:89] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.678303   58376 system_pods.go:89] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.678310   58376 system_pods.go:89] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.678315   58376 system_pods.go:89] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.678322   58376 system_pods.go:89] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.678331   58376 system_pods.go:89] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.678341   58376 system_pods.go:89] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.678352   58376 system_pods.go:126] duration metric: took 5.090968ms to wait for k8s-apps to be running ...
	I0719 15:52:59.678362   58376 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:59.678411   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:59.695116   58376 system_svc.go:56] duration metric: took 16.750228ms WaitForService to wait for kubelet
	I0719 15:52:59.695139   58376 kubeadm.go:582] duration metric: took 4m23.806469478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:59.695163   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:59.697573   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:59.697592   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:59.697602   58376 node_conditions.go:105] duration metric: took 2.433643ms to run NodePressure ...
	I0719 15:52:59.697612   58376 start.go:241] waiting for startup goroutines ...
	I0719 15:52:59.697618   58376 start.go:246] waiting for cluster config update ...
	I0719 15:52:59.697629   58376 start.go:255] writing updated cluster config ...
	I0719 15:52:59.697907   58376 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:59.744965   58376 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:59.746888   58376 out.go:177] * Done! kubectl is now configured to use "embed-certs-817144" cluster and "default" namespace by default
	I0719 15:53:07.003006   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:07.003249   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004552   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:47.004805   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004816   58817 kubeadm.go:310] 
	I0719 15:53:47.004902   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:53:47.004996   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:53:47.005020   58817 kubeadm.go:310] 
	I0719 15:53:47.005068   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:53:47.005117   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:53:47.005246   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:53:47.005262   58817 kubeadm.go:310] 
	I0719 15:53:47.005397   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:53:47.005458   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:53:47.005508   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:53:47.005522   58817 kubeadm.go:310] 
	I0719 15:53:47.005643   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:53:47.005714   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:53:47.005720   58817 kubeadm.go:310] 
	I0719 15:53:47.005828   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:53:47.005924   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:53:47.005987   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:53:47.006080   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:53:47.006092   58817 kubeadm.go:310] 
	I0719 15:53:47.006824   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:53:47.006941   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:53:47.007028   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 15:53:47.007180   58817 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 15:53:47.007244   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:53:47.468272   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:53:47.483560   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:53:47.494671   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:53:47.494691   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:53:47.494742   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:53:47.503568   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:53:47.503630   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:53:47.512606   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:53:47.521247   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:53:47.521303   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:53:47.530361   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.539748   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:53:47.539799   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.549243   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:53:47.559306   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:53:47.559369   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:53:47.570095   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:53:47.648871   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:53:47.649078   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:53:47.792982   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:53:47.793141   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:53:47.793254   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:53:47.992636   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:53:47.994547   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:53:47.994648   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:53:47.994734   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:53:47.994866   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:53:47.994963   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:53:47.995077   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:53:47.995148   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:53:47.995250   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:53:47.995336   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:53:47.995447   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:53:47.995549   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:53:47.995603   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:53:47.995685   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:53:48.092671   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:53:48.256432   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:53:48.334799   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:53:48.483435   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:53:48.504681   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:53:48.505503   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:53:48.505553   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:53:48.654795   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:53:48.656738   58817 out.go:204]   - Booting up control plane ...
	I0719 15:53:48.656849   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:53:48.664278   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:53:48.665556   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:53:48.666292   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:53:48.668355   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:54:28.670119   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:54:28.670451   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:28.670679   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:33.671159   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:33.671408   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:43.671899   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:43.672129   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:03.673219   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:03.673444   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674003   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:43.674282   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674311   58817 kubeadm.go:310] 
	I0719 15:55:43.674362   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:55:43.674430   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:55:43.674439   58817 kubeadm.go:310] 
	I0719 15:55:43.674479   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:55:43.674551   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:55:43.674694   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:55:43.674711   58817 kubeadm.go:310] 
	I0719 15:55:43.674872   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:55:43.674923   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:55:43.674973   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:55:43.674987   58817 kubeadm.go:310] 
	I0719 15:55:43.675076   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:55:43.675185   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:55:43.675204   58817 kubeadm.go:310] 
	I0719 15:55:43.675343   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:55:43.675486   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:55:43.675593   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:55:43.675698   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:55:43.675712   58817 kubeadm.go:310] 
	I0719 15:55:43.676679   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:55:43.676793   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:55:43.676881   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:55:43.676950   58817 kubeadm.go:394] duration metric: took 7m56.357000435s to StartCluster
	I0719 15:55:43.677009   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:55:43.677063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:55:43.720714   58817 cri.go:89] found id: ""
	I0719 15:55:43.720746   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.720757   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:55:43.720765   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:55:43.720832   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:55:43.758961   58817 cri.go:89] found id: ""
	I0719 15:55:43.758987   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.758995   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:55:43.759001   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:55:43.759048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:55:43.798844   58817 cri.go:89] found id: ""
	I0719 15:55:43.798872   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.798882   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:55:43.798889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:55:43.798960   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:55:43.835395   58817 cri.go:89] found id: ""
	I0719 15:55:43.835418   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.835426   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:55:43.835432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:55:43.835499   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:55:43.871773   58817 cri.go:89] found id: ""
	I0719 15:55:43.871800   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.871810   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:55:43.871817   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:55:43.871881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:55:43.903531   58817 cri.go:89] found id: ""
	I0719 15:55:43.903552   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.903559   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:55:43.903565   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:55:43.903613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:55:43.943261   58817 cri.go:89] found id: ""
	I0719 15:55:43.943288   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.943299   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:55:43.943306   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:55:43.943364   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:55:43.980788   58817 cri.go:89] found id: ""
	I0719 15:55:43.980815   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.980826   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:55:43.980837   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:55:43.980853   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:55:44.033880   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:55:44.033922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:55:44.048683   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:55:44.048709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:55:44.129001   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:55:44.129028   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:55:44.129043   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:55:44.245246   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:55:44.245282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:55:44.303587   58817 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:55:44.303632   58817 out.go:239] * 
	W0719 15:55:44.303689   58817 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.303716   58817 out.go:239] * 
	W0719 15:55:44.304733   58817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:55:44.308714   58817 out.go:177] 
	W0719 15:55:44.310103   58817 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.310163   58817 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:55:44.310190   58817 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:55:44.311707   58817 out.go:177] 
	
	
	==> CRI-O <==
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.555562879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405089555542030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3755ec2d-0ac0-4de9-8866-75447b9672e4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.556173236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=615b9fe5-b014-4fbf-85f1-43933a0593e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.556246484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=615b9fe5-b014-4fbf-85f1-43933a0593e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.556285934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=615b9fe5-b014-4fbf-85f1-43933a0593e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.590202981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01200d6f-68d0-40b5-adbb-eedac06076bf name=/runtime.v1.RuntimeService/Version
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.590305165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01200d6f-68d0-40b5-adbb-eedac06076bf name=/runtime.v1.RuntimeService/Version
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.591787802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ad198eb-d3fa-4d56-a3e6-28a364b4dc51 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.592395235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405089592365823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ad198eb-d3fa-4d56-a3e6-28a364b4dc51 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.592990575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=454e37b5-6391-445a-b86f-315e837c94a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.593044766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=454e37b5-6391-445a-b86f-315e837c94a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.593136083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=454e37b5-6391-445a-b86f-315e837c94a9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.634673422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15a98aeb-5d2f-4cab-a304-cd55e730e260 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.634770141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15a98aeb-5d2f-4cab-a304-cd55e730e260 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.636000395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7efbb1b0-198b-4489-af42-ec7fd2a7bca8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.636502166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405089636476806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7efbb1b0-198b-4489-af42-ec7fd2a7bca8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.637018381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11ae1ae1-1c8b-49cd-a8d7-1c0ec60a502b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.637145122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11ae1ae1-1c8b-49cd-a8d7-1c0ec60a502b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.637198617Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=11ae1ae1-1c8b-49cd-a8d7-1c0ec60a502b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.671398415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcf1a300-a6b3-4a9f-8379-96449e7a7b8f name=/runtime.v1.RuntimeService/Version
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.671537937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcf1a300-a6b3-4a9f-8379-96449e7a7b8f name=/runtime.v1.RuntimeService/Version
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.672910248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1bf0684e-dee2-4041-844e-ac27dc58ea44 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.673419204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405089673395521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1bf0684e-dee2-4041-844e-ac27dc58ea44 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.673976075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc13aedd-4ffb-40b3-958b-877456819e95 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.674054892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc13aedd-4ffb-40b3-958b-877456819e95 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:04:49 old-k8s-version-862924 crio[647]: time="2024-07-19 16:04:49.674148726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fc13aedd-4ffb-40b3-958b-877456819e95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul19 15:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051724] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039649] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.567082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.332449] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.594221] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.070476] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.062261] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077473] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.217641] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.149423] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.267895] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +6.718838] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.061033] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.715316] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[ +12.018302] kauditd_printk_skb: 46 callbacks suppressed
	[Jul19 15:51] systemd-fstab-generator[5022]: Ignoring "noauto" option for root device
	[Jul19 15:53] systemd-fstab-generator[5300]: Ignoring "noauto" option for root device
	[  +0.062109] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:04:49 up 17 min,  0 users,  load average: 0.06, 0.03, 0.03
	Linux old-k8s-version-862924 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000c58060, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000a50e70, 0x24, 0x0, ...)
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]: net.(*Dialer).DialContext(0xc0001a1920, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc000a50e70, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000875e80, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc000a50e70, 0x24, 0x60, 0x7f7575e36758, 0x118, ...)
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]: net/http.(*Transport).dial(0xc000872dc0, 0x4f7fe00, 0xc000122018, 0x48ab5d6, 0x3, 0xc000a50e70, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]: net/http.(*Transport).dialConn(0xc000872dc0, 0x4f7fe00, 0xc000122018, 0x0, 0xc000b4efc0, 0x5, 0xc000a50e70, 0x24, 0x0, 0xc000c52120, ...)
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]: net/http.(*Transport).dialConnFor(0xc000872dc0, 0xc000affce0)
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]: created by net/http.(*Transport).queueForDial
	Jul 19 16:04:44 old-k8s-version-862924 kubelet[6473]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 19 16:04:44 old-k8s-version-862924 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 19 16:04:44 old-k8s-version-862924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 19 16:04:44 old-k8s-version-862924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jul 19 16:04:44 old-k8s-version-862924 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 19 16:04:44 old-k8s-version-862924 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 19 16:04:45 old-k8s-version-862924 kubelet[6483]: I0719 16:04:45.062274    6483 server.go:416] Version: v1.20.0
	Jul 19 16:04:45 old-k8s-version-862924 kubelet[6483]: I0719 16:04:45.062576    6483 server.go:837] Client rotation is on, will bootstrap in background
	Jul 19 16:04:45 old-k8s-version-862924 kubelet[6483]: I0719 16:04:45.064637    6483 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 19 16:04:45 old-k8s-version-862924 kubelet[6483]: W0719 16:04:45.065647    6483 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 19 16:04:45 old-k8s-version-862924 kubelet[6483]: I0719 16:04:45.065754    6483 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (221.937818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-862924" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)
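The log above ends with minikube's own suggestion for this failure: check 'journalctl -xeu kubelet' and retry the start with --extra-config=kubelet.cgroup-driver=systemd. As a minimal sketch only (profile name, driver and Kubernetes version are taken from the Audit table earlier in this log; the flag set is abbreviated and has not been re-verified against a live host), the follow-up might look like:

    # inspect the crash-looping kubelet on the node first
    minikube -p old-k8s-version-862924 ssh -- sudo systemctl status kubelet
    minikube -p old-k8s-version-862924 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50

    # retry the start with the kubelet cgroup driver pinned to systemd, as the log suggests
    minikube start -p old-k8s-version-862924 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd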

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (340.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-382231 -n no-preload-382231
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-19 16:07:27.018706523 +0000 UTC m=+6421.814293042
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-382231 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-382231 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.501µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-382231 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
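The assertion here is that the dashboard-metrics-scraper deployment carries the substituted image registry.k8s.io/echoserver:1.4 (set earlier via addons enable dashboard --images=MetricsScraper=..., see the Audit table below). A minimal sketch of the manual equivalent of that check, assuming the apiserver were reachable (in this run the kubectl call above hit the context deadline before it could report anything):

    kubectl --context no-preload-382231 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'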
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-382231 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-382231 logs -n 25: (1.307720794s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-817144            | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 16:06 UTC | 19 Jul 24 16:06 UTC |
	| start   | -p newest-cni-850417 --memory=2200 --alsologtostderr   | newest-cni-850417            | jenkins | v1.33.1 | 19 Jul 24 16:06 UTC | 19 Jul 24 16:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-850417             | newest-cni-850417            | jenkins | v1.33.1 | 19 Jul 24 16:07 UTC | 19 Jul 24 16:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-850417                                   | newest-cni-850417            | jenkins | v1.33.1 | 19 Jul 24 16:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 16:06:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 16:06:37.207049   65029 out.go:291] Setting OutFile to fd 1 ...
	I0719 16:06:37.207163   65029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 16:06:37.207170   65029 out.go:304] Setting ErrFile to fd 2...
	I0719 16:06:37.207175   65029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 16:06:37.207366   65029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 16:06:37.207907   65029 out.go:298] Setting JSON to false
	I0719 16:06:37.208781   65029 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6543,"bootTime":1721398654,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 16:06:37.208838   65029 start.go:139] virtualization: kvm guest
	I0719 16:06:37.211145   65029 out.go:177] * [newest-cni-850417] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 16:06:37.212299   65029 notify.go:220] Checking for updates...
	I0719 16:06:37.212316   65029 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 16:06:37.213378   65029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:06:37.214492   65029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 16:06:37.215855   65029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 16:06:37.216827   65029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 16:06:37.217864   65029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:06:37.219308   65029 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:06:37.219393   65029 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:06:37.219478   65029 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 16:06:37.219548   65029 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 16:06:37.255759   65029 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 16:06:37.256892   65029 start.go:297] selected driver: kvm2
	I0719 16:06:37.256912   65029 start.go:901] validating driver "kvm2" against <nil>
	I0719 16:06:37.256933   65029 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:06:37.257625   65029 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:06:37.257720   65029 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 16:06:37.273308   65029 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 16:06:37.273363   65029 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0719 16:06:37.273391   65029 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0719 16:06:37.273638   65029 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 16:06:37.273715   65029 cni.go:84] Creating CNI manager for ""
	I0719 16:06:37.273730   65029 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 16:06:37.273739   65029 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:06:37.273810   65029 start.go:340] cluster config:
	{Name:newest-cni-850417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-850417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 16:06:37.273963   65029 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:06:37.276193   65029 out.go:177] * Starting "newest-cni-850417" primary control-plane node in "newest-cni-850417" cluster
	I0719 16:06:37.277539   65029 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 16:06:37.277576   65029 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 16:06:37.277606   65029 cache.go:56] Caching tarball of preloaded images
	I0719 16:06:37.277705   65029 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 16:06:37.277719   65029 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0719 16:06:37.277830   65029 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/config.json ...
	I0719 16:06:37.277856   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/config.json: {Name:mk5231e8530ec1717beb96a35348cfe0c9c2a548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:06:37.278013   65029 start.go:360] acquireMachinesLock for newest-cni-850417: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:06:37.278043   65029 start.go:364] duration metric: took 16.003µs to acquireMachinesLock for "newest-cni-850417"
	I0719 16:06:37.278060   65029 start.go:93] Provisioning new machine with config: &{Name:newest-cni-850417 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-850417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 16:06:37.278118   65029 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 16:06:37.279681   65029 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:06:37.279818   65029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:06:37.279861   65029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:06:37.295611   65029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44887
	I0719 16:06:37.296088   65029 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:06:37.296748   65029 main.go:141] libmachine: Using API Version  1
	I0719 16:06:37.296773   65029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:06:37.297159   65029 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:06:37.297376   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetMachineName
	I0719 16:06:37.297539   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:06:37.297697   65029 start.go:159] libmachine.API.Create for "newest-cni-850417" (driver="kvm2")
	I0719 16:06:37.297723   65029 client.go:168] LocalClient.Create starting
	I0719 16:06:37.297751   65029 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 16:06:37.297805   65029 main.go:141] libmachine: Decoding PEM data...
	I0719 16:06:37.297827   65029 main.go:141] libmachine: Parsing certificate...
	I0719 16:06:37.297894   65029 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 16:06:37.297924   65029 main.go:141] libmachine: Decoding PEM data...
	I0719 16:06:37.297946   65029 main.go:141] libmachine: Parsing certificate...
	I0719 16:06:37.297972   65029 main.go:141] libmachine: Running pre-create checks...
	I0719 16:06:37.297993   65029 main.go:141] libmachine: (newest-cni-850417) Calling .PreCreateCheck
	I0719 16:06:37.298356   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetConfigRaw
	I0719 16:06:37.298793   65029 main.go:141] libmachine: Creating machine...
	I0719 16:06:37.298809   65029 main.go:141] libmachine: (newest-cni-850417) Calling .Create
	I0719 16:06:37.298960   65029 main.go:141] libmachine: (newest-cni-850417) Creating KVM machine...
	I0719 16:06:37.300512   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found existing default KVM network
	I0719 16:06:37.301671   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:37.301498   65051 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:cb:27} reservation:<nil>}
	I0719 16:06:37.302718   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:37.302649   65051 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000288860}
	I0719 16:06:37.302780   65029 main.go:141] libmachine: (newest-cni-850417) DBG | created network xml: 
	I0719 16:06:37.302812   65029 main.go:141] libmachine: (newest-cni-850417) DBG | <network>
	I0719 16:06:37.302835   65029 main.go:141] libmachine: (newest-cni-850417) DBG |   <name>mk-newest-cni-850417</name>
	I0719 16:06:37.302851   65029 main.go:141] libmachine: (newest-cni-850417) DBG |   <dns enable='no'/>
	I0719 16:06:37.302860   65029 main.go:141] libmachine: (newest-cni-850417) DBG |   
	I0719 16:06:37.302874   65029 main.go:141] libmachine: (newest-cni-850417) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0719 16:06:37.302886   65029 main.go:141] libmachine: (newest-cni-850417) DBG |     <dhcp>
	I0719 16:06:37.302900   65029 main.go:141] libmachine: (newest-cni-850417) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0719 16:06:37.302912   65029 main.go:141] libmachine: (newest-cni-850417) DBG |     </dhcp>
	I0719 16:06:37.302922   65029 main.go:141] libmachine: (newest-cni-850417) DBG |   </ip>
	I0719 16:06:37.302946   65029 main.go:141] libmachine: (newest-cni-850417) DBG |   
	I0719 16:06:37.302957   65029 main.go:141] libmachine: (newest-cni-850417) DBG | </network>
	I0719 16:06:37.302981   65029 main.go:141] libmachine: (newest-cni-850417) DBG | 
	I0719 16:06:37.308432   65029 main.go:141] libmachine: (newest-cni-850417) DBG | trying to create private KVM network mk-newest-cni-850417 192.168.50.0/24...
	I0719 16:06:37.383253   65029 main.go:141] libmachine: (newest-cni-850417) DBG | private KVM network mk-newest-cni-850417 192.168.50.0/24 created
	I0719 16:06:37.383290   65029 main.go:141] libmachine: (newest-cni-850417) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417 ...
	I0719 16:06:37.383302   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:37.383241   65051 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 16:06:37.383321   65029 main.go:141] libmachine: (newest-cni-850417) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 16:06:37.383604   65029 main.go:141] libmachine: (newest-cni-850417) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 16:06:37.611049   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:37.610919   65051 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa...
	I0719 16:06:37.789583   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:37.789457   65051 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/newest-cni-850417.rawdisk...
	I0719 16:06:37.789632   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Writing magic tar header
	I0719 16:06:37.789654   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Writing SSH key tar header
	I0719 16:06:37.789664   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:37.789607   65051 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417 ...
	I0719 16:06:37.789762   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417
	I0719 16:06:37.789789   65029 main.go:141] libmachine: (newest-cni-850417) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417 (perms=drwx------)
	I0719 16:06:37.789806   65029 main.go:141] libmachine: (newest-cni-850417) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 16:06:37.789820   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 16:06:37.789836   65029 main.go:141] libmachine: (newest-cni-850417) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 16:06:37.789852   65029 main.go:141] libmachine: (newest-cni-850417) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 16:06:37.789866   65029 main.go:141] libmachine: (newest-cni-850417) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 16:06:37.789879   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 16:06:37.789896   65029 main.go:141] libmachine: (newest-cni-850417) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 16:06:37.789910   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 16:06:37.789919   65029 main.go:141] libmachine: (newest-cni-850417) Creating domain...
	I0719 16:06:37.789928   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 16:06:37.789937   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Checking permissions on dir: /home/jenkins
	I0719 16:06:37.789947   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Checking permissions on dir: /home
	I0719 16:06:37.789958   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Skipping /home - not owner
	I0719 16:06:37.791295   65029 main.go:141] libmachine: (newest-cni-850417) define libvirt domain using xml: 
	I0719 16:06:37.791326   65029 main.go:141] libmachine: (newest-cni-850417) <domain type='kvm'>
	I0719 16:06:37.791337   65029 main.go:141] libmachine: (newest-cni-850417)   <name>newest-cni-850417</name>
	I0719 16:06:37.791346   65029 main.go:141] libmachine: (newest-cni-850417)   <memory unit='MiB'>2200</memory>
	I0719 16:06:37.791354   65029 main.go:141] libmachine: (newest-cni-850417)   <vcpu>2</vcpu>
	I0719 16:06:37.791361   65029 main.go:141] libmachine: (newest-cni-850417)   <features>
	I0719 16:06:37.791370   65029 main.go:141] libmachine: (newest-cni-850417)     <acpi/>
	I0719 16:06:37.791378   65029 main.go:141] libmachine: (newest-cni-850417)     <apic/>
	I0719 16:06:37.791388   65029 main.go:141] libmachine: (newest-cni-850417)     <pae/>
	I0719 16:06:37.791397   65029 main.go:141] libmachine: (newest-cni-850417)     
	I0719 16:06:37.791406   65029 main.go:141] libmachine: (newest-cni-850417)   </features>
	I0719 16:06:37.791418   65029 main.go:141] libmachine: (newest-cni-850417)   <cpu mode='host-passthrough'>
	I0719 16:06:37.791427   65029 main.go:141] libmachine: (newest-cni-850417)   
	I0719 16:06:37.791431   65029 main.go:141] libmachine: (newest-cni-850417)   </cpu>
	I0719 16:06:37.791439   65029 main.go:141] libmachine: (newest-cni-850417)   <os>
	I0719 16:06:37.791450   65029 main.go:141] libmachine: (newest-cni-850417)     <type>hvm</type>
	I0719 16:06:37.791467   65029 main.go:141] libmachine: (newest-cni-850417)     <boot dev='cdrom'/>
	I0719 16:06:37.791477   65029 main.go:141] libmachine: (newest-cni-850417)     <boot dev='hd'/>
	I0719 16:06:37.791486   65029 main.go:141] libmachine: (newest-cni-850417)     <bootmenu enable='no'/>
	I0719 16:06:37.791500   65029 main.go:141] libmachine: (newest-cni-850417)   </os>
	I0719 16:06:37.791508   65029 main.go:141] libmachine: (newest-cni-850417)   <devices>
	I0719 16:06:37.791519   65029 main.go:141] libmachine: (newest-cni-850417)     <disk type='file' device='cdrom'>
	I0719 16:06:37.791547   65029 main.go:141] libmachine: (newest-cni-850417)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/boot2docker.iso'/>
	I0719 16:06:37.791560   65029 main.go:141] libmachine: (newest-cni-850417)       <target dev='hdc' bus='scsi'/>
	I0719 16:06:37.791581   65029 main.go:141] libmachine: (newest-cni-850417)       <readonly/>
	I0719 16:06:37.791598   65029 main.go:141] libmachine: (newest-cni-850417)     </disk>
	I0719 16:06:37.791605   65029 main.go:141] libmachine: (newest-cni-850417)     <disk type='file' device='disk'>
	I0719 16:06:37.791612   65029 main.go:141] libmachine: (newest-cni-850417)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 16:06:37.791621   65029 main.go:141] libmachine: (newest-cni-850417)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/newest-cni-850417.rawdisk'/>
	I0719 16:06:37.791639   65029 main.go:141] libmachine: (newest-cni-850417)       <target dev='hda' bus='virtio'/>
	I0719 16:06:37.791645   65029 main.go:141] libmachine: (newest-cni-850417)     </disk>
	I0719 16:06:37.791653   65029 main.go:141] libmachine: (newest-cni-850417)     <interface type='network'>
	I0719 16:06:37.791662   65029 main.go:141] libmachine: (newest-cni-850417)       <source network='mk-newest-cni-850417'/>
	I0719 16:06:37.791666   65029 main.go:141] libmachine: (newest-cni-850417)       <model type='virtio'/>
	I0719 16:06:37.791671   65029 main.go:141] libmachine: (newest-cni-850417)     </interface>
	I0719 16:06:37.791676   65029 main.go:141] libmachine: (newest-cni-850417)     <interface type='network'>
	I0719 16:06:37.791683   65029 main.go:141] libmachine: (newest-cni-850417)       <source network='default'/>
	I0719 16:06:37.791687   65029 main.go:141] libmachine: (newest-cni-850417)       <model type='virtio'/>
	I0719 16:06:37.791693   65029 main.go:141] libmachine: (newest-cni-850417)     </interface>
	I0719 16:06:37.791699   65029 main.go:141] libmachine: (newest-cni-850417)     <serial type='pty'>
	I0719 16:06:37.791721   65029 main.go:141] libmachine: (newest-cni-850417)       <target port='0'/>
	I0719 16:06:37.791738   65029 main.go:141] libmachine: (newest-cni-850417)     </serial>
	I0719 16:06:37.791763   65029 main.go:141] libmachine: (newest-cni-850417)     <console type='pty'>
	I0719 16:06:37.791780   65029 main.go:141] libmachine: (newest-cni-850417)       <target type='serial' port='0'/>
	I0719 16:06:37.791789   65029 main.go:141] libmachine: (newest-cni-850417)     </console>
	I0719 16:06:37.791800   65029 main.go:141] libmachine: (newest-cni-850417)     <rng model='virtio'>
	I0719 16:06:37.791818   65029 main.go:141] libmachine: (newest-cni-850417)       <backend model='random'>/dev/random</backend>
	I0719 16:06:37.791827   65029 main.go:141] libmachine: (newest-cni-850417)     </rng>
	I0719 16:06:37.791834   65029 main.go:141] libmachine: (newest-cni-850417)     
	I0719 16:06:37.791840   65029 main.go:141] libmachine: (newest-cni-850417)     
	I0719 16:06:37.791848   65029 main.go:141] libmachine: (newest-cni-850417)   </devices>
	I0719 16:06:37.791857   65029 main.go:141] libmachine: (newest-cni-850417) </domain>
	I0719 16:06:37.791864   65029 main.go:141] libmachine: (newest-cni-850417) 
	I0719 16:06:37.796379   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:32:41:64 in network default
	I0719 16:06:37.797174   65029 main.go:141] libmachine: (newest-cni-850417) Ensuring networks are active...
	I0719 16:06:37.797190   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:37.797904   65029 main.go:141] libmachine: (newest-cni-850417) Ensuring network default is active
	I0719 16:06:37.798219   65029 main.go:141] libmachine: (newest-cni-850417) Ensuring network mk-newest-cni-850417 is active
	I0719 16:06:37.798821   65029 main.go:141] libmachine: (newest-cni-850417) Getting domain xml...
	I0719 16:06:37.799647   65029 main.go:141] libmachine: (newest-cni-850417) Creating domain...
	I0719 16:06:39.044701   65029 main.go:141] libmachine: (newest-cni-850417) Waiting to get IP...
	I0719 16:06:39.045554   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:39.046011   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:39.046093   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:39.046008   65051 retry.go:31] will retry after 241.323131ms: waiting for machine to come up
	I0719 16:06:39.288461   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:39.288904   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:39.288938   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:39.288861   65051 retry.go:31] will retry after 304.955839ms: waiting for machine to come up
	I0719 16:06:39.595431   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:39.595885   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:39.595913   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:39.595818   65051 retry.go:31] will retry after 329.899369ms: waiting for machine to come up
	I0719 16:06:39.927275   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:39.927726   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:39.927763   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:39.927693   65051 retry.go:31] will retry after 408.938499ms: waiting for machine to come up
	I0719 16:06:40.338207   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:40.338650   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:40.338679   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:40.338597   65051 retry.go:31] will retry after 707.806357ms: waiting for machine to come up
	I0719 16:06:41.048420   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:41.048864   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:41.048893   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:41.048804   65051 retry.go:31] will retry after 862.510278ms: waiting for machine to come up
	I0719 16:06:41.912785   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:41.913205   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:41.913238   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:41.913134   65051 retry.go:31] will retry after 949.617312ms: waiting for machine to come up
	I0719 16:06:42.864359   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:42.864719   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:42.864747   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:42.864690   65051 retry.go:31] will retry after 1.222548405s: waiting for machine to come up
	I0719 16:06:44.089244   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:44.089691   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:44.089720   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:44.089638   65051 retry.go:31] will retry after 1.804585844s: waiting for machine to come up
	I0719 16:06:45.895741   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:45.896282   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:45.896329   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:45.896241   65051 retry.go:31] will retry after 2.075181654s: waiting for machine to come up
	I0719 16:06:47.972866   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:47.973330   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:47.973357   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:47.973298   65051 retry.go:31] will retry after 2.573143921s: waiting for machine to come up
	I0719 16:06:50.547505   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:50.547890   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:50.547913   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:50.547866   65051 retry.go:31] will retry after 2.902588977s: waiting for machine to come up
	I0719 16:06:53.451544   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:53.452048   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:53.452077   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:53.451998   65051 retry.go:31] will retry after 2.739924741s: waiting for machine to come up
	I0719 16:06:56.195038   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:06:56.195572   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find current IP address of domain newest-cni-850417 in network mk-newest-cni-850417
	I0719 16:06:56.195603   65029 main.go:141] libmachine: (newest-cni-850417) DBG | I0719 16:06:56.195524   65051 retry.go:31] will retry after 4.398411301s: waiting for machine to come up
	I0719 16:07:00.595749   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.596177   65029 main.go:141] libmachine: (newest-cni-850417) Found IP for machine: 192.168.50.198
	I0719 16:07:00.596233   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has current primary IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.596256   65029 main.go:141] libmachine: (newest-cni-850417) Reserving static IP address...
	I0719 16:07:00.596572   65029 main.go:141] libmachine: (newest-cni-850417) DBG | unable to find host DHCP lease matching {name: "newest-cni-850417", mac: "52:54:00:59:53:78", ip: "192.168.50.198"} in network mk-newest-cni-850417
	I0719 16:07:00.672297   65029 main.go:141] libmachine: (newest-cni-850417) Reserved static IP address: 192.168.50.198
	I0719 16:07:00.672326   65029 main.go:141] libmachine: (newest-cni-850417) Waiting for SSH to be available...
	I0719 16:07:00.672334   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Getting to WaitForSSH function...
	I0719 16:07:00.674948   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.675325   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:53:78}
	I0719 16:07:00.675356   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.675547   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Using SSH client type: external
	I0719 16:07:00.675575   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa (-rw-------)
	I0719 16:07:00.675611   65029 main.go:141] libmachine: (newest-cni-850417) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 16:07:00.675628   65029 main.go:141] libmachine: (newest-cni-850417) DBG | About to run SSH command:
	I0719 16:07:00.675644   65029 main.go:141] libmachine: (newest-cni-850417) DBG | exit 0
	I0719 16:07:00.798852   65029 main.go:141] libmachine: (newest-cni-850417) DBG | SSH cmd err, output: <nil>: 
	I0719 16:07:00.799089   65029 main.go:141] libmachine: (newest-cni-850417) KVM machine creation complete!
	I0719 16:07:00.799394   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetConfigRaw
	I0719 16:07:00.799956   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:00.800133   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:00.800306   65029 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 16:07:00.800320   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetState
	I0719 16:07:00.801630   65029 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 16:07:00.801643   65029 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 16:07:00.801650   65029 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 16:07:00.801660   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:00.804284   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.804630   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:00.804657   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.804814   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:00.804992   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:00.805176   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:00.805304   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:00.805469   65029 main.go:141] libmachine: Using SSH client type: native
	I0719 16:07:00.805733   65029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0719 16:07:00.805752   65029 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 16:07:00.901673   65029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 16:07:00.901702   65029 main.go:141] libmachine: Detecting the provisioner...
	I0719 16:07:00.901728   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:00.904780   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.905167   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:00.905198   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:00.905380   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:00.905539   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:00.905711   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:00.905866   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:00.906031   65029 main.go:141] libmachine: Using SSH client type: native
	I0719 16:07:00.906208   65029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0719 16:07:00.906219   65029 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 16:07:01.007283   65029 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 16:07:01.007343   65029 main.go:141] libmachine: found compatible host: buildroot
	I0719 16:07:01.007349   65029 main.go:141] libmachine: Provisioning with buildroot...
	I0719 16:07:01.007365   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetMachineName
	I0719 16:07:01.007635   65029 buildroot.go:166] provisioning hostname "newest-cni-850417"
	I0719 16:07:01.007660   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetMachineName
	I0719 16:07:01.007829   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:01.010888   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.011229   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.011259   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.011377   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:01.011573   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.011718   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.011864   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:01.012029   65029 main.go:141] libmachine: Using SSH client type: native
	I0719 16:07:01.012190   65029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0719 16:07:01.012203   65029 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-850417 && echo "newest-cni-850417" | sudo tee /etc/hostname
	I0719 16:07:01.127908   65029 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-850417
	
	I0719 16:07:01.127961   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:01.131025   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.131401   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.131452   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.131617   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:01.131821   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.132007   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.132177   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:01.132415   65029 main.go:141] libmachine: Using SSH client type: native
	I0719 16:07:01.132665   65029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0719 16:07:01.132685   65029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-850417' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-850417/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-850417' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 16:07:01.240093   65029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 16:07:01.240131   65029 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 16:07:01.240153   65029 buildroot.go:174] setting up certificates
	I0719 16:07:01.240166   65029 provision.go:84] configureAuth start
	I0719 16:07:01.240182   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetMachineName
	I0719 16:07:01.240449   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetIP
	I0719 16:07:01.243149   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.243596   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.243622   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.243753   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:01.245982   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.246351   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.246377   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.246536   65029 provision.go:143] copyHostCerts
	I0719 16:07:01.246611   65029 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 16:07:01.246625   65029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 16:07:01.246705   65029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 16:07:01.246835   65029 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 16:07:01.246846   65029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 16:07:01.246885   65029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 16:07:01.246989   65029 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 16:07:01.246999   65029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 16:07:01.247034   65029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 16:07:01.247143   65029 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.newest-cni-850417 san=[127.0.0.1 192.168.50.198 localhost minikube newest-cni-850417]
	I0719 16:07:01.543346   65029 provision.go:177] copyRemoteCerts
	I0719 16:07:01.543403   65029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 16:07:01.543477   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:01.546174   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.546493   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.546524   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.546610   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:01.546802   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.546974   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:01.547153   65029 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa Username:docker}
	I0719 16:07:01.633548   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 16:07:01.659620   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 16:07:01.683522   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 16:07:01.707573   65029 provision.go:87] duration metric: took 467.391555ms to configureAuth
	I0719 16:07:01.707610   65029 buildroot.go:189] setting minikube options for container-runtime
	I0719 16:07:01.707846   65029 config.go:182] Loaded profile config "newest-cni-850417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 16:07:01.707959   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:01.711130   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.711638   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.711670   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.711839   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:01.712035   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.712202   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.712399   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:01.712673   65029 main.go:141] libmachine: Using SSH client type: native
	I0719 16:07:01.712912   65029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0719 16:07:01.712940   65029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 16:07:01.989732   65029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 16:07:01.989776   65029 main.go:141] libmachine: Checking connection to Docker...
	I0719 16:07:01.989788   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetURL
	I0719 16:07:01.991108   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Using libvirt version 6000000
	I0719 16:07:01.993724   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.994083   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.994098   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.994399   65029 main.go:141] libmachine: Docker is up and running!
	I0719 16:07:01.994420   65029 main.go:141] libmachine: Reticulating splines...
	I0719 16:07:01.994427   65029 client.go:171] duration metric: took 24.696698185s to LocalClient.Create
	I0719 16:07:01.994455   65029 start.go:167] duration metric: took 24.696757163s to libmachine.API.Create "newest-cni-850417"
	I0719 16:07:01.994467   65029 start.go:293] postStartSetup for "newest-cni-850417" (driver="kvm2")
	I0719 16:07:01.994487   65029 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 16:07:01.994509   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:01.994722   65029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 16:07:01.994745   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:01.996940   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.997250   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:01.997278   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:01.997389   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:01.997572   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:01.997742   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:01.997893   65029 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa Username:docker}
	I0719 16:07:02.077007   65029 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 16:07:02.081734   65029 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 16:07:02.081760   65029 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 16:07:02.081830   65029 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 16:07:02.081922   65029 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 16:07:02.082043   65029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 16:07:02.092110   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 16:07:02.117854   65029 start.go:296] duration metric: took 123.364218ms for postStartSetup
	I0719 16:07:02.117913   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetConfigRaw
	I0719 16:07:02.118474   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetIP
	I0719 16:07:02.121533   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.121993   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:02.122025   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.122252   65029 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/config.json ...
	I0719 16:07:02.122442   65029 start.go:128] duration metric: took 24.84431377s to createHost
	I0719 16:07:02.122463   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:02.125090   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.125409   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:02.125441   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.125600   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:02.125937   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:02.126115   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:02.126304   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:02.126500   65029 main.go:141] libmachine: Using SSH client type: native
	I0719 16:07:02.126711   65029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.198 22 <nil> <nil>}
	I0719 16:07:02.126726   65029 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 16:07:02.231167   65029 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721405222.208789206
	
	I0719 16:07:02.231192   65029 fix.go:216] guest clock: 1721405222.208789206
	I0719 16:07:02.231201   65029 fix.go:229] Guest: 2024-07-19 16:07:02.208789206 +0000 UTC Remote: 2024-07-19 16:07:02.122453051 +0000 UTC m=+24.951821706 (delta=86.336155ms)
	I0719 16:07:02.231219   65029 fix.go:200] guest clock delta is within tolerance: 86.336155ms
	I0719 16:07:02.231224   65029 start.go:83] releasing machines lock for "newest-cni-850417", held for 24.953172463s
	I0719 16:07:02.231242   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:02.231492   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetIP
	I0719 16:07:02.235039   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.235627   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:02.235662   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.235887   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:02.236436   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:02.236624   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:02.236699   65029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 16:07:02.236751   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:02.236881   65029 ssh_runner.go:195] Run: cat /version.json
	I0719 16:07:02.236906   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:02.239557   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.239626   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.240011   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:02.240046   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:02.240067   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.240118   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:02.240207   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:02.240372   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:02.240457   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:02.240560   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:02.240636   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:02.240764   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:02.240774   65029 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa Username:docker}
	I0719 16:07:02.240881   65029 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa Username:docker}
	I0719 16:07:02.337474   65029 ssh_runner.go:195] Run: systemctl --version
	I0719 16:07:02.343967   65029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 16:07:02.502345   65029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 16:07:02.508725   65029 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 16:07:02.508808   65029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 16:07:02.525999   65029 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 16:07:02.526024   65029 start.go:495] detecting cgroup driver to use...
	I0719 16:07:02.526081   65029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 16:07:02.542999   65029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 16:07:02.557730   65029 docker.go:217] disabling cri-docker service (if available) ...
	I0719 16:07:02.557785   65029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 16:07:02.573053   65029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 16:07:02.588009   65029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 16:07:02.714620   65029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 16:07:02.849189   65029 docker.go:233] disabling docker service ...
	I0719 16:07:02.849266   65029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 16:07:02.864723   65029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 16:07:02.878599   65029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 16:07:03.030400   65029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 16:07:03.165161   65029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 16:07:03.181404   65029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 16:07:03.201021   65029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 16:07:03.201088   65029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 16:07:03.212410   65029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 16:07:03.212483   65029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 16:07:03.223653   65029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 16:07:03.234657   65029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 16:07:03.245908   65029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 16:07:03.257357   65029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 16:07:03.269611   65029 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 16:07:03.289367   65029 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
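For reference, after the sed edits above the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf would end up with roughly the following settings. This is an illustrative reconstruction from the commands in this log, not output captured from the run, and the TOML section headers are assumed:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]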
	I0719 16:07:03.301601   65029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 16:07:03.312473   65029 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 16:07:03.312567   65029 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 16:07:03.326115   65029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 16:07:03.335915   65029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:07:03.467392   65029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 16:07:03.607977   65029 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 16:07:03.608044   65029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 16:07:03.613109   65029 start.go:563] Will wait 60s for crictl version
	I0719 16:07:03.613177   65029 ssh_runner.go:195] Run: which crictl
	I0719 16:07:03.617381   65029 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 16:07:03.661873   65029 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 16:07:03.661957   65029 ssh_runner.go:195] Run: crio --version
	I0719 16:07:03.692916   65029 ssh_runner.go:195] Run: crio --version
	I0719 16:07:03.727777   65029 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 16:07:03.728833   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetIP
	I0719 16:07:03.732044   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:03.732384   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:03.732413   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:03.732613   65029 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 16:07:03.736797   65029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 16:07:03.752738   65029 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0719 16:07:03.754009   65029 kubeadm.go:883] updating cluster {Name:newest-cni-850417 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-850417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 16:07:03.754155   65029 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 16:07:03.754228   65029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 16:07:03.790891   65029 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 16:07:03.790974   65029 ssh_runner.go:195] Run: which lz4
	I0719 16:07:03.795343   65029 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 16:07:03.799852   65029 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 16:07:03.799885   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0719 16:07:05.254076   65029 crio.go:462] duration metric: took 1.458762857s to copy over tarball
	I0719 16:07:05.254169   65029 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 16:07:07.356815   65029 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.102610176s)
	I0719 16:07:07.356842   65029 crio.go:469] duration metric: took 2.102734743s to extract the tarball
	I0719 16:07:07.356850   65029 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 16:07:07.394046   65029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 16:07:07.446172   65029 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 16:07:07.446194   65029 cache_images.go:84] Images are preloaded, skipping loading
	I0719 16:07:07.446201   65029 kubeadm.go:934] updating node { 192.168.50.198 8443 v1.31.0-beta.0 crio true true} ...
	I0719 16:07:07.446328   65029 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-850417 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-850417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 16:07:07.446409   65029 ssh_runner.go:195] Run: crio config
	I0719 16:07:07.503838   65029 cni.go:84] Creating CNI manager for ""
	I0719 16:07:07.503864   65029 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 16:07:07.503873   65029 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0719 16:07:07.503900   65029 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.198 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-850417 NodeName:newest-cni-850417 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.50.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 16:07:07.504016   65029 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-850417"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 16:07:07.504069   65029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 16:07:07.514725   65029 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 16:07:07.514804   65029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 16:07:07.525622   65029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0719 16:07:07.545378   65029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 16:07:07.562794   65029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0719 16:07:07.580685   65029 ssh_runner.go:195] Run: grep 192.168.50.198	control-plane.minikube.internal$ /etc/hosts
	I0719 16:07:07.584915   65029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 16:07:07.598946   65029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:07:07.726713   65029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 16:07:07.744995   65029 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417 for IP: 192.168.50.198
	I0719 16:07:07.745028   65029 certs.go:194] generating shared ca certs ...
	I0719 16:07:07.745047   65029 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:07.745195   65029 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 16:07:07.745231   65029 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 16:07:07.745240   65029 certs.go:256] generating profile certs ...
	I0719 16:07:07.745285   65029 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/client.key
	I0719 16:07:07.745298   65029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/client.crt with IP's: []
	I0719 16:07:07.872620   65029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/client.crt ...
	I0719 16:07:07.872654   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/client.crt: {Name:mkc4b674e8f9312052efd644ef4fd204b7a36fd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:07.872818   65029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/client.key ...
	I0719 16:07:07.872829   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/client.key: {Name:mk5b6dec041b1b1d2696a34eec75a141943fad7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:07.872901   65029 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.key.a5742b17
	I0719 16:07:07.872923   65029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.crt.a5742b17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.198]
	I0719 16:07:08.194814   65029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.crt.a5742b17 ...
	I0719 16:07:08.194843   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.crt.a5742b17: {Name:mk19eca4c636534e3b91a2961aa184aec1bddff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:08.195055   65029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.key.a5742b17 ...
	I0719 16:07:08.195075   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.key.a5742b17: {Name:mk2b6150c263d7f06ccbd83d000e6bf9add0340b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:08.195212   65029 certs.go:381] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.crt.a5742b17 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.crt
	I0719 16:07:08.195321   65029 certs.go:385] copying /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.key.a5742b17 -> /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.key
	I0719 16:07:08.195407   65029 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.key
	I0719 16:07:08.195430   65029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.crt with IP's: []
	I0719 16:07:08.605487   65029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.crt ...
	I0719 16:07:08.605528   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.crt: {Name:mkd4f4f1121a07f46d5a9a2e9fd7221e7b8813ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:08.605735   65029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.key ...
	I0719 16:07:08.605758   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.key: {Name:mk69a52d060d48c253f3c575ed9c6f7014d0cddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:08.605968   65029 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 16:07:08.606008   65029 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 16:07:08.606019   65029 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 16:07:08.606039   65029 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 16:07:08.606061   65029 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 16:07:08.606081   65029 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 16:07:08.606146   65029 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 16:07:08.606717   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 16:07:08.657945   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 16:07:08.706303   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 16:07:08.738831   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 16:07:08.765810   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 16:07:08.793422   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 16:07:08.818420   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 16:07:08.842820   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/newest-cni-850417/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 16:07:08.868254   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 16:07:08.893829   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 16:07:08.919786   65029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 16:07:08.947140   65029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 16:07:08.967112   65029 ssh_runner.go:195] Run: openssl version
	I0719 16:07:08.973221   65029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 16:07:08.986952   65029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 16:07:08.992318   65029 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 16:07:08.992393   65029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 16:07:08.999807   65029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 16:07:09.014946   65029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 16:07:09.027804   65029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 16:07:09.033058   65029 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 16:07:09.033121   65029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 16:07:09.039423   65029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 16:07:09.051769   65029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 16:07:09.064743   65029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:07:09.069519   65029 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:07:09.069594   65029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:07:09.075882   65029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
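The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash scheme: each CA file in /etc/ssl/certs is linked under its certificate hash so the library can find it by hash lookup. A minimal sketch of the pattern these commands follow, reusing the paths from this log:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"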
	I0719 16:07:09.087741   65029 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 16:07:09.092342   65029 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 16:07:09.092395   65029 kubeadm.go:392] StartCluster: {Name:newest-cni-850417 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-850417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 16:07:09.092484   65029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 16:07:09.092523   65029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 16:07:09.146056   65029 cri.go:89] found id: ""
	I0719 16:07:09.146137   65029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 16:07:09.157671   65029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 16:07:09.169991   65029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 16:07:09.181386   65029 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 16:07:09.181410   65029 kubeadm.go:157] found existing configuration files:
	
	I0719 16:07:09.181463   65029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 16:07:09.192837   65029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 16:07:09.192913   65029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 16:07:09.204911   65029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 16:07:09.215359   65029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 16:07:09.215429   65029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 16:07:09.227560   65029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 16:07:09.238604   65029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 16:07:09.238662   65029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 16:07:09.250155   65029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 16:07:09.261286   65029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 16:07:09.261358   65029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 16:07:09.272350   65029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 16:07:09.392540   65029 kubeadm.go:310] W0719 16:07:09.377055     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 16:07:09.393369   65029 kubeadm.go:310] W0719 16:07:09.378268     848 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 16:07:09.536611   65029 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 16:07:19.563950   65029 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 16:07:19.564028   65029 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 16:07:19.564131   65029 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 16:07:19.564266   65029 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 16:07:19.564351   65029 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 16:07:19.564404   65029 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 16:07:19.566156   65029 out.go:204]   - Generating certificates and keys ...
	I0719 16:07:19.566226   65029 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 16:07:19.566310   65029 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 16:07:19.566370   65029 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 16:07:19.566422   65029 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 16:07:19.566472   65029 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 16:07:19.566548   65029 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 16:07:19.566636   65029 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 16:07:19.566787   65029 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-850417] and IPs [192.168.50.198 127.0.0.1 ::1]
	I0719 16:07:19.566865   65029 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 16:07:19.567046   65029 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-850417] and IPs [192.168.50.198 127.0.0.1 ::1]
	I0719 16:07:19.567128   65029 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 16:07:19.567220   65029 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 16:07:19.567283   65029 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 16:07:19.567360   65029 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 16:07:19.567431   65029 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 16:07:19.567511   65029 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 16:07:19.567584   65029 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 16:07:19.567667   65029 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 16:07:19.567750   65029 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 16:07:19.567870   65029 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 16:07:19.567964   65029 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 16:07:19.569545   65029 out.go:204]   - Booting up control plane ...
	I0719 16:07:19.569654   65029 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 16:07:19.569750   65029 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 16:07:19.569833   65029 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 16:07:19.569963   65029 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 16:07:19.570073   65029 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 16:07:19.570139   65029 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 16:07:19.570331   65029 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 16:07:19.570436   65029 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 16:07:19.570519   65029 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.327086ms
	I0719 16:07:19.570634   65029 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 16:07:19.570725   65029 kubeadm.go:310] [api-check] The API server is healthy after 5.501381885s
	I0719 16:07:19.570871   65029 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 16:07:19.571038   65029 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 16:07:19.571125   65029 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 16:07:19.571348   65029 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-850417 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 16:07:19.571445   65029 kubeadm.go:310] [bootstrap-token] Using token: qgv2do.4bi4rxmivw2nh4t2
	I0719 16:07:19.572888   65029 out.go:204]   - Configuring RBAC rules ...
	I0719 16:07:19.572983   65029 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 16:07:19.573078   65029 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 16:07:19.573221   65029 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 16:07:19.573334   65029 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 16:07:19.573467   65029 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 16:07:19.573561   65029 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 16:07:19.573664   65029 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 16:07:19.573703   65029 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 16:07:19.573743   65029 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 16:07:19.573748   65029 kubeadm.go:310] 
	I0719 16:07:19.573798   65029 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 16:07:19.573806   65029 kubeadm.go:310] 
	I0719 16:07:19.573873   65029 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 16:07:19.573881   65029 kubeadm.go:310] 
	I0719 16:07:19.573905   65029 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 16:07:19.573963   65029 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 16:07:19.574005   65029 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 16:07:19.574011   65029 kubeadm.go:310] 
	I0719 16:07:19.574065   65029 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 16:07:19.574078   65029 kubeadm.go:310] 
	I0719 16:07:19.574124   65029 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 16:07:19.574130   65029 kubeadm.go:310] 
	I0719 16:07:19.574171   65029 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 16:07:19.574266   65029 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 16:07:19.574324   65029 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 16:07:19.574330   65029 kubeadm.go:310] 
	I0719 16:07:19.574404   65029 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 16:07:19.574487   65029 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 16:07:19.574499   65029 kubeadm.go:310] 
	I0719 16:07:19.574599   65029 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qgv2do.4bi4rxmivw2nh4t2 \
	I0719 16:07:19.574727   65029 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 16:07:19.574762   65029 kubeadm.go:310] 	--control-plane 
	I0719 16:07:19.574770   65029 kubeadm.go:310] 
	I0719 16:07:19.574879   65029 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 16:07:19.574893   65029 kubeadm.go:310] 
	I0719 16:07:19.575006   65029 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qgv2do.4bi4rxmivw2nh4t2 \
	I0719 16:07:19.575157   65029 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 16:07:19.575174   65029 cni.go:84] Creating CNI manager for ""
	I0719 16:07:19.575184   65029 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 16:07:19.576716   65029 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 16:07:19.577979   65029 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 16:07:19.590275   65029 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
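The 496-byte /etc/cni/net.d/1-k8s.conflist written here is not echoed in the log. A typical minikube bridge conflist looks roughly like the sketch below; the exact fields are assumptions, with the subnet normally taken from the pod CIDR (10.42.0.0/16 in this run):
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}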
	I0719 16:07:19.609676   65029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 16:07:19.609707   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:19.609719   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-850417 minikube.k8s.io/updated_at=2024_07_19T16_07_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=newest-cni-850417 minikube.k8s.io/primary=true
	I0719 16:07:19.831031   65029 ops.go:34] apiserver oom_adj: -16
	I0719 16:07:19.831210   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:20.331611   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:20.832068   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:21.331459   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:21.832025   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:22.331963   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:22.831934   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:23.331406   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:23.831727   65029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:07:24.023188   65029 kubeadm.go:1113] duration metric: took 4.413536302s to wait for elevateKubeSystemPrivileges
	I0719 16:07:24.023236   65029 kubeadm.go:394] duration metric: took 14.930842939s to StartCluster
	I0719 16:07:24.023260   65029 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:24.023347   65029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 16:07:24.025679   65029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:24.025952   65029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 16:07:24.025957   65029 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.198 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 16:07:24.026064   65029 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 16:07:24.026129   65029 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-850417"
	I0719 16:07:24.026147   65029 config.go:182] Loaded profile config "newest-cni-850417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 16:07:24.026159   65029 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-850417"
	I0719 16:07:24.026161   65029 addons.go:69] Setting default-storageclass=true in profile "newest-cni-850417"
	I0719 16:07:24.026192   65029 host.go:66] Checking if "newest-cni-850417" exists ...
	I0719 16:07:24.026195   65029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-850417"
	I0719 16:07:24.026602   65029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:07:24.026603   65029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:07:24.026630   65029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:07:24.026633   65029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:07:24.028459   65029 out.go:177] * Verifying Kubernetes components...
	I0719 16:07:24.030256   65029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:07:24.042587   65029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0719 16:07:24.042772   65029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0719 16:07:24.042985   65029 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:07:24.043160   65029 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:07:24.043426   65029 main.go:141] libmachine: Using API Version  1
	I0719 16:07:24.043447   65029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:07:24.043552   65029 main.go:141] libmachine: Using API Version  1
	I0719 16:07:24.043568   65029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:07:24.043930   65029 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:07:24.044040   65029 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:07:24.044135   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetState
	I0719 16:07:24.044566   65029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:07:24.044998   65029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:07:24.053124   65029 addons.go:234] Setting addon default-storageclass=true in "newest-cni-850417"
	I0719 16:07:24.053166   65029 host.go:66] Checking if "newest-cni-850417" exists ...
	I0719 16:07:24.053549   65029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:07:24.053581   65029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:07:24.061612   65029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39763
	I0719 16:07:24.063140   65029 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:07:24.063639   65029 main.go:141] libmachine: Using API Version  1
	I0719 16:07:24.063652   65029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:07:24.063992   65029 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:07:24.064132   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetState
	I0719 16:07:24.066306   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:24.068464   65029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:07:24.069754   65029 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:07:24.069769   65029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 16:07:24.069787   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:24.071579   65029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 16:07:24.071976   65029 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:07:24.072654   65029 main.go:141] libmachine: Using API Version  1
	I0719 16:07:24.072668   65029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:07:24.073183   65029 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:07:24.074193   65029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:07:24.074247   65029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:07:24.074461   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:24.074485   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:24.074504   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:24.074516   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:24.074628   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:24.074976   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:24.075238   65029 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa Username:docker}
	I0719 16:07:24.090035   65029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0719 16:07:24.090649   65029 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:07:24.091140   65029 main.go:141] libmachine: Using API Version  1
	I0719 16:07:24.091166   65029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:07:24.091459   65029 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:07:24.091650   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetState
	I0719 16:07:24.093405   65029 main.go:141] libmachine: (newest-cni-850417) Calling .DriverName
	I0719 16:07:24.093656   65029 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 16:07:24.093672   65029 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 16:07:24.093840   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHHostname
	I0719 16:07:24.096771   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:24.097181   65029 main.go:141] libmachine: (newest-cni-850417) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:53:78", ip: ""} in network mk-newest-cni-850417: {Iface:virbr4 ExpiryTime:2024-07-19 17:06:51 +0000 UTC Type:0 Mac:52:54:00:59:53:78 Iaid: IPaddr:192.168.50.198 Prefix:24 Hostname:newest-cni-850417 Clientid:01:52:54:00:59:53:78}
	I0719 16:07:24.097212   65029 main.go:141] libmachine: (newest-cni-850417) DBG | domain newest-cni-850417 has defined IP address 192.168.50.198 and MAC address 52:54:00:59:53:78 in network mk-newest-cni-850417
	I0719 16:07:24.097367   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHPort
	I0719 16:07:24.097532   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHKeyPath
	I0719 16:07:24.097810   65029 main.go:141] libmachine: (newest-cni-850417) Calling .GetSSHUsername
	I0719 16:07:24.098036   65029 sshutil.go:53] new ssh client: &{IP:192.168.50.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/newest-cni-850417/id_rsa Username:docker}
	I0719 16:07:24.314312   65029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 16:07:24.314349   65029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 16:07:24.432318   65029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:07:24.450177   65029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 16:07:25.167039   65029 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0719 16:07:25.168409   65029 api_server.go:52] waiting for apiserver process to appear ...
	I0719 16:07:25.168469   65029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 16:07:25.650648   65029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.218290244s)
	I0719 16:07:25.650713   65029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.200501798s)
	I0719 16:07:25.650797   65029 main.go:141] libmachine: Making call to close driver server
	I0719 16:07:25.650760   65029 main.go:141] libmachine: Making call to close driver server
	I0719 16:07:25.650838   65029 main.go:141] libmachine: (newest-cni-850417) Calling .Close
	I0719 16:07:25.650760   65029 api_server.go:72] duration metric: took 1.624773633s to wait for apiserver process to appear ...
	I0719 16:07:25.650897   65029 api_server.go:88] waiting for apiserver healthz status ...
	I0719 16:07:25.650917   65029 api_server.go:253] Checking apiserver healthz at https://192.168.50.198:8443/healthz ...
	I0719 16:07:25.650814   65029 main.go:141] libmachine: (newest-cni-850417) Calling .Close
	I0719 16:07:25.651220   65029 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:07:25.651235   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Closing plugin on server side
	I0719 16:07:25.651236   65029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:07:25.651248   65029 main.go:141] libmachine: Making call to close driver server
	I0719 16:07:25.651255   65029 main.go:141] libmachine: (newest-cni-850417) Calling .Close
	I0719 16:07:25.651259   65029 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:07:25.651272   65029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:07:25.651281   65029 main.go:141] libmachine: Making call to close driver server
	I0719 16:07:25.651299   65029 main.go:141] libmachine: (newest-cni-850417) Calling .Close
	I0719 16:07:25.651530   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Closing plugin on server side
	I0719 16:07:25.651543   65029 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:07:25.651556   65029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:07:25.651557   65029 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:07:25.651566   65029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:07:25.651577   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Closing plugin on server side
	I0719 16:07:25.667215   65029 api_server.go:279] https://192.168.50.198:8443/healthz returned 200:
	ok
	I0719 16:07:25.668500   65029 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 16:07:25.668532   65029 api_server.go:131] duration metric: took 17.626398ms to wait for apiserver health ...
	I0719 16:07:25.668544   65029 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 16:07:25.692043   65029 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-850417" context rescaled to 1 replicas
	I0719 16:07:25.692113   65029 main.go:141] libmachine: Making call to close driver server
	I0719 16:07:25.692138   65029 main.go:141] libmachine: (newest-cni-850417) Calling .Close
	I0719 16:07:25.692472   65029 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:07:25.692549   65029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:07:25.692521   65029 main.go:141] libmachine: (newest-cni-850417) DBG | Closing plugin on server side
	I0719 16:07:25.695968   65029 system_pods.go:59] 8 kube-system pods found
	I0719 16:07:25.696010   65029 system_pods.go:61] "coredns-5cfdc65f69-5cbh2" [ded17c98-a5a4-418b-8781-b9748f353557] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 16:07:25.696020   65029 system_pods.go:61] "coredns-5cfdc65f69-vrp5p" [9381e03a-6e5d-49e5-a58c-7e97a6a9d566] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 16:07:25.696030   65029 system_pods.go:61] "etcd-newest-cni-850417" [3793ac5c-e4df-49ce-aef4-c61f5d1e1d39] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 16:07:25.696042   65029 system_pods.go:61] "kube-apiserver-newest-cni-850417" [233314d1-26cb-4a60-a5fc-604179f11774] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 16:07:25.696054   65029 system_pods.go:61] "kube-controller-manager-newest-cni-850417" [f4e922c9-6b5d-4364-b49b-3f0b14d4b08c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 16:07:25.696061   65029 system_pods.go:61] "kube-proxy-nbbsv" [64e00701-a32b-4030-9ecb-10d46c88da8f] Running
	I0719 16:07:25.696071   65029 system_pods.go:61] "kube-scheduler-newest-cni-850417" [597d909a-d2ec-4c09-bdbd-30a2cf1ed834] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 16:07:25.696079   65029 system_pods.go:61] "storage-provisioner" [09ff084a-c880-46d1-b2cd-9a5cdb5c8e37] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 16:07:25.696088   65029 system_pods.go:74] duration metric: took 27.536492ms to wait for pod list to return data ...
	I0719 16:07:25.696098   65029 default_sa.go:34] waiting for default service account to be created ...
	I0719 16:07:25.697407   65029 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 16:07:25.698860   65029 addons.go:510] duration metric: took 1.672791844s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 16:07:25.705989   65029 default_sa.go:45] found service account: "default"
	I0719 16:07:25.706015   65029 default_sa.go:55] duration metric: took 9.909854ms for default service account to be created ...
	I0719 16:07:25.706030   65029 kubeadm.go:582] duration metric: took 1.680041806s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 16:07:25.706051   65029 node_conditions.go:102] verifying NodePressure condition ...
	I0719 16:07:25.713913   65029 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 16:07:25.713945   65029 node_conditions.go:123] node cpu capacity is 2
	I0719 16:07:25.713959   65029 node_conditions.go:105] duration metric: took 7.898793ms to run NodePressure ...
	I0719 16:07:25.713972   65029 start.go:241] waiting for startup goroutines ...
	I0719 16:07:25.713980   65029 start.go:246] waiting for cluster config update ...
	I0719 16:07:25.713994   65029 start.go:255] writing updated cluster config ...
	I0719 16:07:25.714303   65029 ssh_runner.go:195] Run: rm -f paused
	I0719 16:07:25.768092   65029 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 16:07:25.770092   65029 out.go:177] * Done! kubectl is now configured to use "newest-cni-850417" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.674070256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405247674041775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61908780-da89-417f-b52b-73caa1c993bb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.674676569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aefa9f31-b000-4ec0-81d9-32921036d9e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.674750147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aefa9f31-b000-4ec0-81d9-32921036d9e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.675072973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aefa9f31-b000-4ec0-81d9-32921036d9e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.724998689Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d897e5f-955a-47dc-9074-0280bb22235c name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.725078430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d897e5f-955a-47dc-9074-0280bb22235c name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.726122720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6f9cc54-af39-4b82-a522-9581edddeb55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.726503550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405247726475980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6f9cc54-af39-4b82-a522-9581edddeb55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.727397814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1447d62e-e555-4b24-b353-5c0355f5daa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.727602613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1447d62e-e555-4b24-b353-5c0355f5daa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.728161642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1447d62e-e555-4b24-b353-5c0355f5daa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.766059619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95f74779-dc2c-4bb6-b469-1cbbf5ccb091 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.766152786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95f74779-dc2c-4bb6-b469-1cbbf5ccb091 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.767344325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7b050ea-1f96-412c-8e68-a7dadefed24d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.767692246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405247767670959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7b050ea-1f96-412c-8e68-a7dadefed24d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.768162644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a56a946-0e1c-4ebc-a049-60a99abbf0d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.768248255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a56a946-0e1c-4ebc-a049-60a99abbf0d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.768447061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a56a946-0e1c-4ebc-a049-60a99abbf0d0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.803298202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e7de3b2-0e5f-4c41-9391-a8d7d263c722 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.803384884Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e7de3b2-0e5f-4c41-9391-a8d7d263c722 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.804778669Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bc0622f-50a3-41dc-8ec8-edc934bc6804 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.805192649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405247805160756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bc0622f-50a3-41dc-8ec8-edc934bc6804 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.805688399Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4a26108-4430-4ad5-bcd3-2f364114a7e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.805745682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4a26108-4430-4ad5-bcd3-2f364114a7e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:27 no-preload-382231 crio[724]: time="2024-07-19 16:07:27.805982627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2,PodSandboxId:5f1258a23c752cd06752d71b1be1bf538b9cd64269731b7319a64b56bde3a3e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721404355492679299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qd84x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73ebfa49-3a5a-44c0-948a-233d7a147bdd,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad,PodSandboxId:8eb60a9774a1e7e5c62f3bca17c851dcc3f771018c578b2b08998416160e5f53,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404355051355072,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-zk22p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03dcb169-2796-4dbd-8ccf-383e07d90b44,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3,PodSandboxId:fa6d4aff4f662857a77ee112ffae6e3dd3705c8e385dc36dc0f42d539842bfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404354952676968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91cc
f728-07fe-4b05-823e-513e1a3c3505,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636,PodSandboxId:1f535cc47b65eb50bda0de8e22b3f19664f60c6d749e6124bdadc4694df0e5db,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404354949247180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4xxpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ff50d32-70e5-4821-b161-9c0bf4de6a2
a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886,PodSandboxId:593fe20aa2d267764f3b1bc14bc38ab974730537bc6907971eadd3c0ff553376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721404342758327457,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7b38d26672b62bb816126c9f441cb57,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6,PodSandboxId:9e09562f0bff097f77a905add4e4cfb6b7e251de9a56f85d2eb3de7f10d790bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721404342774128780,Labels:map[string]string{io.kubernetes.conta
iner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca25a81f8acd688a83c4a693448aee56,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0,PodSandboxId:39aae21812130193027a3fe3e8bbfcc69575a3fc2c3109e27d1ceb7082968aaf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721404342743293934,Labels:map[string]string{io.kubernetes.
container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec,PodSandboxId:2ad839947a22d3ef58b169456e10bd010b50f152c243d9c3585fc222d8edc9d9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721404342702122793,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d809355d9f2059acd14cb3c4ca683bf6,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd,PodSandboxId:29d6eb42a98c6e5012d5fd138ad46fb6d259451c957fc1c7d85f338de55ef6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721404058281120556,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-382231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dfa77447d322f92fd32b9367ddb48b21,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4a26108-4430-4ad5-bcd3-2f364114a7e8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4eda5dba755ba       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   5f1258a23c752       kube-proxy-qd84x
	1f64c9c744fd5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   8eb60a9774a1e       coredns-5cfdc65f69-zk22p
	936b78e859523       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   fa6d4aff4f662       storage-provisioner
	ec70406cf2daf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   1f535cc47b65e       coredns-5cfdc65f69-4xxpm
	29cd3efbe0958       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   15 minutes ago      Running             kube-controller-manager   2                   9e09562f0bff0       kube-controller-manager-no-preload-382231
	16192e7348afc       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   15 minutes ago      Running             etcd                      2                   593fe20aa2d26       etcd-no-preload-382231
	1aed6ff1362a0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   15 minutes ago      Running             kube-apiserver            2                   39aae21812130       kube-apiserver-no-preload-382231
	76f6c5f0c8688       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   15 minutes ago      Running             kube-scheduler            2                   2ad839947a22d       kube-scheduler-no-preload-382231
	a15608be38472       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   29d6eb42a98c6       kube-apiserver-no-preload-382231
	
	
	==> coredns [1f64c9c744fd5b6a4770814df7f8a06aff460d374b9c759709a4749d3a6230ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ec70406cf2daf16b4b5260289f6dbe5e444b2ee4b88184dc81b0f944eb580636] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-382231
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-382231
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=no-preload-382231
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:52:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-382231
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 16:07:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 16:02:51 +0000   Fri, 19 Jul 2024 15:52:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 16:02:51 +0000   Fri, 19 Jul 2024 15:52:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 16:02:51 +0000   Fri, 19 Jul 2024 15:52:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 16:02:51 +0000   Fri, 19 Jul 2024 15:52:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    no-preload-382231
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 691bf3048c134d3b99ae1d3b2842df38
	  System UUID:                691bf304-8c13-4d3b-99ae-1d3b2842df38
	  Boot ID:                    39770819-d2fb-48d1-b593-69c126cb1da9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-4xxpm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-zk22p                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-382231                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-382231             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-382231    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qd84x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-382231             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-78fcd8795b-rc6ft              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-382231 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node no-preload-382231 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node no-preload-382231 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-382231 event: Registered Node no-preload-382231 in Controller
	
	
	==> dmesg <==
	[  +0.050714] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040119] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.535494] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.364388] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.560857] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.917312] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.058776] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062717] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.180281] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.154015] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.289269] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[ +14.949713] systemd-fstab-generator[1173]: Ignoring "noauto" option for root device
	[  +0.061134] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.954688] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +5.633631] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.459628] kauditd_printk_skb: 86 callbacks suppressed
	[Jul19 15:52] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.079872] systemd-fstab-generator[2941]: Ignoring "noauto" option for root device
	[  +4.396966] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.672565] systemd-fstab-generator[3271]: Ignoring "noauto" option for root device
	[  +5.404903] systemd-fstab-generator[3387]: Ignoring "noauto" option for root device
	[  +0.142154] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.122564] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [16192e7348afc92bdcac1369936002b25206ce7f1043859175e780c62c0f1886] <==
	{"level":"info","ts":"2024-07-19T15:52:23.40032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc became leader at term 2"}
	{"level":"info","ts":"2024-07-19T15:52:23.40033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bcb2eab2b5d0a9fc elected leader bcb2eab2b5d0a9fc at term 2"}
	{"level":"info","ts":"2024-07-19T15:52:23.405328Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"bcb2eab2b5d0a9fc","local-member-attributes":"{Name:no-preload-382231 ClientURLs:[https://192.168.39.227:2379]}","request-path":"/0/members/bcb2eab2b5d0a9fc/attributes","cluster-id":"a9051c714e34311b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:52:23.40548Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:52:23.405691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:52:23.406266Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.409456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:52:23.418152Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:52:23.412207Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T15:52:23.414288Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T15:52:23.417905Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a9051c714e34311b","local-member-id":"bcb2eab2b5d0a9fc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.420995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.421049Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:52:23.42594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T15:52:23.430912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.227:2379"}
	{"level":"info","ts":"2024-07-19T16:02:23.789119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-07-19T16:02:23.799705Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":684,"took":"10.197113ms","hash":2473783565,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-19T16:02:23.799777Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2473783565,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T16:07:10.397594Z","caller":"traceutil/trace.go:171","msg":"trace[1355212193] linearizableReadLoop","detail":"{readStateIndex:1350; appliedIndex:1349; }","duration":"137.087323ms","start":"2024-07-19T16:07:10.260461Z","end":"2024-07-19T16:07:10.397549Z","steps":["trace[1355212193] 'read index received'  (duration: 136.891471ms)","trace[1355212193] 'applied index is now lower than readState.Index'  (duration: 195.287µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T16:07:10.398135Z","caller":"traceutil/trace.go:171","msg":"trace[1258508416] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"238.716784ms","start":"2024-07-19T16:07:10.159375Z","end":"2024-07-19T16:07:10.398092Z","steps":["trace[1258508416] 'process raft request'  (duration: 238.086659ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T16:07:10.398423Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.89021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T16:07:10.398504Z","caller":"traceutil/trace.go:171","msg":"trace[1518521900] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1160; }","duration":"138.037199ms","start":"2024-07-19T16:07:10.260457Z","end":"2024-07-19T16:07:10.398494Z","steps":["trace[1518521900] 'agreement among raft nodes before linearized reading'  (duration: 137.866986ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T16:07:23.799158Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-07-19T16:07:23.803693Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":927,"took":"3.777871ms","hash":1223744924,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-19T16:07:23.803835Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1223744924,"revision":927,"compact-revision":684}
	
	
	==> kernel <==
	 16:07:28 up 20 min,  0 users,  load average: 0.41, 0.23, 0.18
	Linux no-preload-382231 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1aed6ff1362a0360b88d9613ff75eea04da4736d205be30b5507e3456c5810c0] <==
	I0719 16:03:26.601486       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 16:03:26.601560       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:05:26.602001       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 16:05:26.602146       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0719 16:05:26.602021       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 16:05:26.602333       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0719 16:05:26.603581       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 16:05:26.603615       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:07:25.604647       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 16:07:25.604934       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 16:07:26.607382       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 16:07:26.607518       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0719 16:07:26.607530       1 handler_proxy.go:99] no RequestInfo found in the context
	E0719 16:07:26.607588       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0719 16:07:26.608842       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 16:07:26.608850       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a15608be38472524f43486dfad856671f7704388b17777f0d7a8d7eb259778fd] <==
	W0719 15:52:18.584607       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.592444       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.610079       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.636080       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.738917       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.743531       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.769728       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.837400       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.854019       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.886602       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.890359       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.912666       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.928266       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.947783       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.953953       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:18.971158       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.044287       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.086349       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.147043       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.224235       1 logging.go:55] [core] [Channel #15 SubChannel #17]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.304513       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.456622       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.470563       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.571373       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 15:52:19.628231       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [29cd3efbe09588da4f1d583361fbd5c398bd42f937a5b64c92961ae19b7976a6] <==
	E0719 16:02:03.584093       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:02:03.678599       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:02:33.591607       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:02:33.687740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 16:02:51.517998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-382231"
	E0719 16:03:03.598566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:03:03.697914       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:03:33.606501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:03:33.706927       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 16:03:46.109581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="208.777µs"
	I0719 16:04:00.109343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="72.841µs"
	E0719 16:04:03.613258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:04:03.718930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:04:33.621092       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:04:33.727433       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:05:03.628124       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:05:03.736599       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:05:33.634759       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:05:33.745253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:06:03.643276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:06:03.765418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:06:33.651161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:06:33.784089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:07:03.658604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0719 16:07:03.793787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4eda5dba755ba2f2580ab5fd45dc8144b1c353b55d904b7dbd50bf92347ed7e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 15:52:35.719251       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 15:52:35.730719       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0719 15:52:35.730841       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 15:52:35.772456       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 15:52:35.772527       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:52:35.772562       1 server_linux.go:170] "Using iptables Proxier"
	I0719 15:52:35.775861       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 15:52:35.776204       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 15:52:35.776232       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:52:35.777848       1 config.go:197] "Starting service config controller"
	I0719 15:52:35.777884       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:52:35.777932       1 config.go:104] "Starting endpoint slice config controller"
	I0719 15:52:35.777963       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:52:35.778887       1 config.go:326] "Starting node config controller"
	I0719 15:52:35.778923       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:52:35.879060       1 shared_informer.go:320] Caches are synced for node config
	I0719 15:52:35.879114       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:52:35.879208       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [76f6c5f0c8688be880902fd6b89578e4bdbc9c6aea26750789f7819fdbd791ec] <==
	E0719 15:52:25.675760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.675046       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:25.676246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0719 15:52:25.675542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 15:52:25.678069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 15:52:25.678163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:25.678243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:25.678282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 15:52:25.678310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.608027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 15:52:26.609114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.653080       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:26.653204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.658985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 15:52:26.659597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.799971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 15:52:26.800205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.862930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 15:52:26.863051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 15:52:26.943090       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 15:52:26.943214       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0719 15:52:28.766884       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 16:05:23 no-preload-382231 kubelet[3278]: E0719 16:05:23.088461    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:05:28 no-preload-382231 kubelet[3278]: E0719 16:05:28.163316    3278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:05:28 no-preload-382231 kubelet[3278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:05:28 no-preload-382231 kubelet[3278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:05:28 no-preload-382231 kubelet[3278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:05:28 no-preload-382231 kubelet[3278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:05:36 no-preload-382231 kubelet[3278]: E0719 16:05:36.088860    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:05:48 no-preload-382231 kubelet[3278]: E0719 16:05:48.089103    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:06:03 no-preload-382231 kubelet[3278]: E0719 16:06:03.088754    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:06:18 no-preload-382231 kubelet[3278]: E0719 16:06:18.088280    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:06:28 no-preload-382231 kubelet[3278]: E0719 16:06:28.162733    3278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:06:28 no-preload-382231 kubelet[3278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:06:28 no-preload-382231 kubelet[3278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:06:28 no-preload-382231 kubelet[3278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:06:28 no-preload-382231 kubelet[3278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:06:30 no-preload-382231 kubelet[3278]: E0719 16:06:30.089617    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:06:41 no-preload-382231 kubelet[3278]: E0719 16:06:41.088709    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:06:52 no-preload-382231 kubelet[3278]: E0719 16:06:52.088990    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:07:04 no-preload-382231 kubelet[3278]: E0719 16:07:04.088729    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:07:18 no-preload-382231 kubelet[3278]: E0719 16:07:18.092027    3278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-rc6ft" podUID="5348ffd6-5e80-4533-bc25-3dcd08c43ff4"
	Jul 19 16:07:28 no-preload-382231 kubelet[3278]: E0719 16:07:28.161965    3278 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:07:28 no-preload-382231 kubelet[3278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:07:28 no-preload-382231 kubelet[3278]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:07:28 no-preload-382231 kubelet[3278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:07:28 no-preload-382231 kubelet[3278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [936b78e8595233311ea914d1e0d409b3c341fd7d1c084a9c16c1c3c24dc3e8a3] <==
	I0719 15:52:35.304179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 15:52:35.390334       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 15:52:35.400247       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 15:52:35.423889       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 15:52:35.424314       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-382231_b3e9a515-3fb8-4ff8-876f-51547a216032!
	I0719 15:52:35.427968       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b4ed317-2d3b-4008-a7e3-0badc1e15741", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-382231_b3e9a515-3fb8-4ff8-876f-51547a216032 became leader
	I0719 15:52:35.528143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-382231_b3e9a515-3fb8-4ff8-876f-51547a216032!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-382231 -n no-preload-382231
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-382231 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-rc6ft
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-382231 describe pod metrics-server-78fcd8795b-rc6ft
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-382231 describe pod metrics-server-78fcd8795b-rc6ft: exit status 1 (60.072801ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-rc6ft" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-382231 describe pod metrics-server-78fcd8795b-rc6ft: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (340.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (528s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-19 16:10:34.492076728 +0000 UTC m=+6609.287663268
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.661µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-601445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-601445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-601445 logs -n 25: (1.334878218s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | systemctl cat kubelet                                |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | journalctl -xeu kubelet --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo cat                           | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo cat                           | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC |                     |
	|         | systemctl status docker --all                        |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | systemctl cat docker                                 |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo cat                           | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo docker                        | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC |                     |
	|         | systemctl status cri-docker                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | systemctl cat cri-docker                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo cat                           | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo cat                           | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC |                     |
	|         | systemctl status containerd                          |                |         |         |                     |                     |
	|         | --all --full --no-pager                              |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | systemctl cat containerd                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo cat                           | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo cat                           | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | containerd config dump                               |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | systemctl status crio --all                          |                |         |         |                     |                     |
	|         | --full --no-pager                                    |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo                               | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | systemctl cat crio --no-pager                        |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo find                          | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p kindnet-526259 sudo crio                          | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p kindnet-526259                                    | kindnet-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	| start   | -p bridge-526259 --memory=3072                       | bridge-526259  | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p flannel-526259 pgrep -a                           | flannel-526259 | jenkins | v1.33.1 | 19 Jul 24 16:10 UTC | 19 Jul 24 16:10 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 16:10:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 16:10:11.379282   70654 out.go:291] Setting OutFile to fd 1 ...
	I0719 16:10:11.379563   70654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 16:10:11.379573   70654 out.go:304] Setting ErrFile to fd 2...
	I0719 16:10:11.379578   70654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 16:10:11.379853   70654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 16:10:11.380432   70654 out.go:298] Setting JSON to false
	I0719 16:10:11.381561   70654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6757,"bootTime":1721398654,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 16:10:11.381619   70654 start.go:139] virtualization: kvm guest
	I0719 16:10:11.383764   70654 out.go:177] * [bridge-526259] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 16:10:11.385476   70654 notify.go:220] Checking for updates...
	I0719 16:10:11.385480   70654 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 16:10:11.386851   70654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:10:11.388124   70654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 16:10:11.389376   70654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 16:10:11.390572   70654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 16:10:11.391819   70654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:10:11.393327   70654 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:10:11.393413   70654 config.go:182] Loaded profile config "enable-default-cni-526259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:10:11.393497   70654 config.go:182] Loaded profile config "flannel-526259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:10:11.393567   70654 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 16:10:11.432496   70654 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 16:10:11.433723   70654 start.go:297] selected driver: kvm2
	I0719 16:10:11.433742   70654 start.go:901] validating driver "kvm2" against <nil>
	I0719 16:10:11.433753   70654 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:10:11.434465   70654 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:10:11.434569   70654 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 16:10:11.451138   70654 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 16:10:11.451201   70654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 16:10:11.451473   70654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:10:11.451504   70654 cni.go:84] Creating CNI manager for "bridge"
	I0719 16:10:11.451513   70654 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:10:11.451578   70654 start.go:340] cluster config:
	{Name:bridge-526259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-526259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 16:10:11.451750   70654 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:10:11.454270   70654 out.go:177] * Starting "bridge-526259" primary control-plane node in "bridge-526259" cluster
	I0719 16:10:11.455947   70654 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 16:10:11.456000   70654 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 16:10:11.456008   70654 cache.go:56] Caching tarball of preloaded images
	I0719 16:10:11.456113   70654 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 16:10:11.456129   70654 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 16:10:11.456223   70654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/bridge-526259/config.json ...
	I0719 16:10:11.456241   70654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/bridge-526259/config.json: {Name:mk1bf73640480f429868c7fdb9bd554fa174bda5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:10:11.456400   70654 start.go:360] acquireMachinesLock for bridge-526259: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:10:11.456434   70654 start.go:364] duration metric: took 18.515µs to acquireMachinesLock for "bridge-526259"
	I0719 16:10:11.456457   70654 start.go:93] Provisioning new machine with config: &{Name:bridge-526259 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-526259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 16:10:11.456522   70654 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 16:10:13.497598   68889 kubeadm.go:310] [api-check] The API server is healthy after 5.501917998s
	I0719 16:10:13.516532   68889 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 16:10:13.532474   68889 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 16:10:13.555958   68889 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 16:10:13.556215   68889 kubeadm.go:310] [mark-control-plane] Marking the node enable-default-cni-526259 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 16:10:13.568498   68889 kubeadm.go:310] [bootstrap-token] Using token: 2h6ho7.vyqv5d9xpmj07qzm
	I0719 16:10:10.846799   67244 node_ready.go:53] node "flannel-526259" has status "Ready":"False"
	I0719 16:10:11.846631   67244 node_ready.go:49] node "flannel-526259" has status "Ready":"True"
	I0719 16:10:11.846655   67244 node_ready.go:38] duration metric: took 12.503388407s for node "flannel-526259" to be "Ready" ...
	I0719 16:10:11.846666   67244 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 16:10:11.854991   67244 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:13.862552   67244 pod_ready.go:102] pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace has status "Ready":"False"
	I0719 16:10:13.570139   68889 out.go:204]   - Configuring RBAC rules ...
	I0719 16:10:13.570319   68889 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 16:10:13.575854   68889 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 16:10:13.583656   68889 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 16:10:13.587481   68889 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 16:10:13.593973   68889 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 16:10:13.597765   68889 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 16:10:13.909178   68889 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 16:10:14.383031   68889 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 16:10:14.912477   68889 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 16:10:14.913979   68889 kubeadm.go:310] 
	I0719 16:10:14.914075   68889 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 16:10:14.914086   68889 kubeadm.go:310] 
	I0719 16:10:14.914203   68889 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 16:10:14.914216   68889 kubeadm.go:310] 
	I0719 16:10:14.914267   68889 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 16:10:14.914345   68889 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 16:10:14.914409   68889 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 16:10:14.914418   68889 kubeadm.go:310] 
	I0719 16:10:14.914522   68889 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 16:10:14.914540   68889 kubeadm.go:310] 
	I0719 16:10:14.914624   68889 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 16:10:14.914634   68889 kubeadm.go:310] 
	I0719 16:10:14.914718   68889 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 16:10:14.914831   68889 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 16:10:14.914928   68889 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 16:10:14.914938   68889 kubeadm.go:310] 
	I0719 16:10:14.915070   68889 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 16:10:14.915193   68889 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 16:10:14.915209   68889 kubeadm.go:310] 
	I0719 16:10:14.915331   68889 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2h6ho7.vyqv5d9xpmj07qzm \
	I0719 16:10:14.915471   68889 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 16:10:14.915507   68889 kubeadm.go:310] 	--control-plane 
	I0719 16:10:14.915517   68889 kubeadm.go:310] 
	I0719 16:10:14.915683   68889 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 16:10:14.915700   68889 kubeadm.go:310] 
	I0719 16:10:14.915816   68889 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2h6ho7.vyqv5d9xpmj07qzm \
	I0719 16:10:14.915950   68889 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 16:10:14.916146   68889 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 16:10:14.916169   68889 cni.go:84] Creating CNI manager for "bridge"
	I0719 16:10:14.917787   68889 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 16:10:14.919260   68889 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 16:10:14.934268   68889 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 16:10:14.958745   68889 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 16:10:14.958847   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:14.958899   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-526259 minikube.k8s.io/updated_at=2024_07_19T16_10_14_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=enable-default-cni-526259 minikube.k8s.io/primary=true
	I0719 16:10:11.458223   70654 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:10:11.458450   70654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:10:11.458491   70654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:10:11.479426   70654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0719 16:10:11.480025   70654 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:10:11.480789   70654 main.go:141] libmachine: Using API Version  1
	I0719 16:10:11.480819   70654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:10:11.481348   70654 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:10:11.481663   70654 main.go:141] libmachine: (bridge-526259) Calling .GetMachineName
	I0719 16:10:11.481938   70654 main.go:141] libmachine: (bridge-526259) Calling .DriverName
	I0719 16:10:11.482082   70654 start.go:159] libmachine.API.Create for "bridge-526259" (driver="kvm2")
	I0719 16:10:11.482114   70654 client.go:168] LocalClient.Create starting
	I0719 16:10:11.482148   70654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem
	I0719 16:10:11.482188   70654 main.go:141] libmachine: Decoding PEM data...
	I0719 16:10:11.482209   70654 main.go:141] libmachine: Parsing certificate...
	I0719 16:10:11.482344   70654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem
	I0719 16:10:11.482376   70654 main.go:141] libmachine: Decoding PEM data...
	I0719 16:10:11.482393   70654 main.go:141] libmachine: Parsing certificate...
	I0719 16:10:11.482417   70654 main.go:141] libmachine: Running pre-create checks...
	I0719 16:10:11.482432   70654 main.go:141] libmachine: (bridge-526259) Calling .PreCreateCheck
	I0719 16:10:11.482859   70654 main.go:141] libmachine: (bridge-526259) Calling .GetConfigRaw
	I0719 16:10:11.483302   70654 main.go:141] libmachine: Creating machine...
	I0719 16:10:11.483322   70654 main.go:141] libmachine: (bridge-526259) Calling .Create
	I0719 16:10:11.483518   70654 main.go:141] libmachine: (bridge-526259) Creating KVM machine...
	I0719 16:10:11.484922   70654 main.go:141] libmachine: (bridge-526259) DBG | found existing default KVM network
	I0719 16:10:11.486434   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.486268   70677 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:85:ed} reservation:<nil>}
	I0719 16:10:11.487369   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.487289   70677 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:14:81:8b} reservation:<nil>}
	I0719 16:10:11.488283   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.488184   70677 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:44:b3:3f} reservation:<nil>}
	I0719 16:10:11.489485   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.489381   70677 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380ef0}
	I0719 16:10:11.489512   70654 main.go:141] libmachine: (bridge-526259) DBG | created network xml: 
	I0719 16:10:11.489524   70654 main.go:141] libmachine: (bridge-526259) DBG | <network>
	I0719 16:10:11.489532   70654 main.go:141] libmachine: (bridge-526259) DBG |   <name>mk-bridge-526259</name>
	I0719 16:10:11.489540   70654 main.go:141] libmachine: (bridge-526259) DBG |   <dns enable='no'/>
	I0719 16:10:11.489547   70654 main.go:141] libmachine: (bridge-526259) DBG |   
	I0719 16:10:11.489561   70654 main.go:141] libmachine: (bridge-526259) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0719 16:10:11.489579   70654 main.go:141] libmachine: (bridge-526259) DBG |     <dhcp>
	I0719 16:10:11.489596   70654 main.go:141] libmachine: (bridge-526259) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0719 16:10:11.489606   70654 main.go:141] libmachine: (bridge-526259) DBG |     </dhcp>
	I0719 16:10:11.489612   70654 main.go:141] libmachine: (bridge-526259) DBG |   </ip>
	I0719 16:10:11.489619   70654 main.go:141] libmachine: (bridge-526259) DBG |   
	I0719 16:10:11.489627   70654 main.go:141] libmachine: (bridge-526259) DBG | </network>
	I0719 16:10:11.489637   70654 main.go:141] libmachine: (bridge-526259) DBG | 
	I0719 16:10:11.494833   70654 main.go:141] libmachine: (bridge-526259) DBG | trying to create private KVM network mk-bridge-526259 192.168.72.0/24...
	I0719 16:10:11.590673   70654 main.go:141] libmachine: (bridge-526259) DBG | private KVM network mk-bridge-526259 192.168.72.0/24 created
	I0719 16:10:11.590707   70654 main.go:141] libmachine: (bridge-526259) Setting up store path in /home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259 ...
	I0719 16:10:11.590724   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.590599   70677 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 16:10:11.590750   70654 main.go:141] libmachine: (bridge-526259) Building disk image from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 16:10:11.590768   70654 main.go:141] libmachine: (bridge-526259) Downloading /home/jenkins/minikube-integration/19302-3847/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 16:10:11.845465   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.845366   70677 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259/id_rsa...
	I0719 16:10:11.969028   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.968880   70677 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259/bridge-526259.rawdisk...
	I0719 16:10:11.969068   70654 main.go:141] libmachine: (bridge-526259) DBG | Writing magic tar header
	I0719 16:10:11.969083   70654 main.go:141] libmachine: (bridge-526259) DBG | Writing SSH key tar header
	I0719 16:10:11.969095   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:11.969026   70677 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259 ...
	I0719 16:10:11.969253   70654 main.go:141] libmachine: (bridge-526259) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259 (perms=drwx------)
	I0719 16:10:11.969290   70654 main.go:141] libmachine: (bridge-526259) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube/machines (perms=drwxr-xr-x)
	I0719 16:10:11.969302   70654 main.go:141] libmachine: (bridge-526259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259
	I0719 16:10:11.969320   70654 main.go:141] libmachine: (bridge-526259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube/machines
	I0719 16:10:11.969335   70654 main.go:141] libmachine: (bridge-526259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 16:10:11.969346   70654 main.go:141] libmachine: (bridge-526259) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847/.minikube (perms=drwxr-xr-x)
	I0719 16:10:11.969358   70654 main.go:141] libmachine: (bridge-526259) Setting executable bit set on /home/jenkins/minikube-integration/19302-3847 (perms=drwxrwxr-x)
	I0719 16:10:11.969371   70654 main.go:141] libmachine: (bridge-526259) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 16:10:11.969384   70654 main.go:141] libmachine: (bridge-526259) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 16:10:11.969395   70654 main.go:141] libmachine: (bridge-526259) Creating domain...
	I0719 16:10:11.969411   70654 main.go:141] libmachine: (bridge-526259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-3847
	I0719 16:10:11.969427   70654 main.go:141] libmachine: (bridge-526259) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 16:10:11.969437   70654 main.go:141] libmachine: (bridge-526259) DBG | Checking permissions on dir: /home/jenkins
	I0719 16:10:11.969447   70654 main.go:141] libmachine: (bridge-526259) DBG | Checking permissions on dir: /home
	I0719 16:10:11.969459   70654 main.go:141] libmachine: (bridge-526259) DBG | Skipping /home - not owner
	I0719 16:10:11.970517   70654 main.go:141] libmachine: (bridge-526259) define libvirt domain using xml: 
	I0719 16:10:11.970541   70654 main.go:141] libmachine: (bridge-526259) <domain type='kvm'>
	I0719 16:10:11.970551   70654 main.go:141] libmachine: (bridge-526259)   <name>bridge-526259</name>
	I0719 16:10:11.970558   70654 main.go:141] libmachine: (bridge-526259)   <memory unit='MiB'>3072</memory>
	I0719 16:10:11.970566   70654 main.go:141] libmachine: (bridge-526259)   <vcpu>2</vcpu>
	I0719 16:10:11.970576   70654 main.go:141] libmachine: (bridge-526259)   <features>
	I0719 16:10:11.970594   70654 main.go:141] libmachine: (bridge-526259)     <acpi/>
	I0719 16:10:11.970605   70654 main.go:141] libmachine: (bridge-526259)     <apic/>
	I0719 16:10:11.970613   70654 main.go:141] libmachine: (bridge-526259)     <pae/>
	I0719 16:10:11.970629   70654 main.go:141] libmachine: (bridge-526259)     
	I0719 16:10:11.970655   70654 main.go:141] libmachine: (bridge-526259)   </features>
	I0719 16:10:11.970679   70654 main.go:141] libmachine: (bridge-526259)   <cpu mode='host-passthrough'>
	I0719 16:10:11.970691   70654 main.go:141] libmachine: (bridge-526259)   
	I0719 16:10:11.970701   70654 main.go:141] libmachine: (bridge-526259)   </cpu>
	I0719 16:10:11.970709   70654 main.go:141] libmachine: (bridge-526259)   <os>
	I0719 16:10:11.970719   70654 main.go:141] libmachine: (bridge-526259)     <type>hvm</type>
	I0719 16:10:11.970728   70654 main.go:141] libmachine: (bridge-526259)     <boot dev='cdrom'/>
	I0719 16:10:11.970738   70654 main.go:141] libmachine: (bridge-526259)     <boot dev='hd'/>
	I0719 16:10:11.970760   70654 main.go:141] libmachine: (bridge-526259)     <bootmenu enable='no'/>
	I0719 16:10:11.970777   70654 main.go:141] libmachine: (bridge-526259)   </os>
	I0719 16:10:11.970798   70654 main.go:141] libmachine: (bridge-526259)   <devices>
	I0719 16:10:11.970827   70654 main.go:141] libmachine: (bridge-526259)     <disk type='file' device='cdrom'>
	I0719 16:10:11.970851   70654 main.go:141] libmachine: (bridge-526259)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259/boot2docker.iso'/>
	I0719 16:10:11.970868   70654 main.go:141] libmachine: (bridge-526259)       <target dev='hdc' bus='scsi'/>
	I0719 16:10:11.970879   70654 main.go:141] libmachine: (bridge-526259)       <readonly/>
	I0719 16:10:11.970888   70654 main.go:141] libmachine: (bridge-526259)     </disk>
	I0719 16:10:11.970898   70654 main.go:141] libmachine: (bridge-526259)     <disk type='file' device='disk'>
	I0719 16:10:11.970910   70654 main.go:141] libmachine: (bridge-526259)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 16:10:11.970923   70654 main.go:141] libmachine: (bridge-526259)       <source file='/home/jenkins/minikube-integration/19302-3847/.minikube/machines/bridge-526259/bridge-526259.rawdisk'/>
	I0719 16:10:11.970937   70654 main.go:141] libmachine: (bridge-526259)       <target dev='hda' bus='virtio'/>
	I0719 16:10:11.970949   70654 main.go:141] libmachine: (bridge-526259)     </disk>
	I0719 16:10:11.970963   70654 main.go:141] libmachine: (bridge-526259)     <interface type='network'>
	I0719 16:10:11.970974   70654 main.go:141] libmachine: (bridge-526259)       <source network='mk-bridge-526259'/>
	I0719 16:10:11.970988   70654 main.go:141] libmachine: (bridge-526259)       <model type='virtio'/>
	I0719 16:10:11.971000   70654 main.go:141] libmachine: (bridge-526259)     </interface>
	I0719 16:10:11.971011   70654 main.go:141] libmachine: (bridge-526259)     <interface type='network'>
	I0719 16:10:11.971023   70654 main.go:141] libmachine: (bridge-526259)       <source network='default'/>
	I0719 16:10:11.971034   70654 main.go:141] libmachine: (bridge-526259)       <model type='virtio'/>
	I0719 16:10:11.971042   70654 main.go:141] libmachine: (bridge-526259)     </interface>
	I0719 16:10:11.971052   70654 main.go:141] libmachine: (bridge-526259)     <serial type='pty'>
	I0719 16:10:11.971068   70654 main.go:141] libmachine: (bridge-526259)       <target port='0'/>
	I0719 16:10:11.971086   70654 main.go:141] libmachine: (bridge-526259)     </serial>
	I0719 16:10:11.971098   70654 main.go:141] libmachine: (bridge-526259)     <console type='pty'>
	I0719 16:10:11.971107   70654 main.go:141] libmachine: (bridge-526259)       <target type='serial' port='0'/>
	I0719 16:10:11.971127   70654 main.go:141] libmachine: (bridge-526259)     </console>
	I0719 16:10:11.971141   70654 main.go:141] libmachine: (bridge-526259)     <rng model='virtio'>
	I0719 16:10:11.971169   70654 main.go:141] libmachine: (bridge-526259)       <backend model='random'>/dev/random</backend>
	I0719 16:10:11.971201   70654 main.go:141] libmachine: (bridge-526259)     </rng>
	I0719 16:10:11.971221   70654 main.go:141] libmachine: (bridge-526259)     
	I0719 16:10:11.971227   70654 main.go:141] libmachine: (bridge-526259)     
	I0719 16:10:11.971238   70654 main.go:141] libmachine: (bridge-526259)   </devices>
	I0719 16:10:11.971246   70654 main.go:141] libmachine: (bridge-526259) </domain>
	I0719 16:10:11.971264   70654 main.go:141] libmachine: (bridge-526259) 
	I0719 16:10:11.975406   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:8b:63:71 in network default
	I0719 16:10:11.976043   70654 main.go:141] libmachine: (bridge-526259) Ensuring networks are active...
	I0719 16:10:11.976066   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:11.976784   70654 main.go:141] libmachine: (bridge-526259) Ensuring network default is active
	I0719 16:10:11.977104   70654 main.go:141] libmachine: (bridge-526259) Ensuring network mk-bridge-526259 is active
	I0719 16:10:11.977734   70654 main.go:141] libmachine: (bridge-526259) Getting domain xml...
	I0719 16:10:11.978384   70654 main.go:141] libmachine: (bridge-526259) Creating domain...
	I0719 16:10:13.399358   70654 main.go:141] libmachine: (bridge-526259) Waiting to get IP...
	I0719 16:10:13.400312   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:13.400877   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:13.400905   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:13.400861   70677 retry.go:31] will retry after 267.407168ms: waiting for machine to come up
	I0719 16:10:13.670317   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:13.670875   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:13.670907   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:13.670852   70677 retry.go:31] will retry after 353.899211ms: waiting for machine to come up
	I0719 16:10:14.026458   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:14.027055   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:14.027096   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:14.027013   70677 retry.go:31] will retry after 426.573159ms: waiting for machine to come up
	I0719 16:10:14.455761   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:14.456378   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:14.456404   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:14.456329   70677 retry.go:31] will retry after 464.468229ms: waiting for machine to come up
	I0719 16:10:14.922000   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:14.922558   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:14.922583   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:14.922524   70677 retry.go:31] will retry after 466.511854ms: waiting for machine to come up
	I0719 16:10:15.390327   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:15.390841   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:15.390887   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:15.390792   70677 retry.go:31] will retry after 789.387845ms: waiting for machine to come up
	I0719 16:10:16.181941   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:16.182521   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:16.182550   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:16.182475   70677 retry.go:31] will retry after 910.622629ms: waiting for machine to come up
	I0719 16:10:15.862956   67244 pod_ready.go:102] pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace has status "Ready":"False"
	I0719 16:10:18.360764   67244 pod_ready.go:102] pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace has status "Ready":"False"
	I0719 16:10:15.129944   68889 ops.go:34] apiserver oom_adj: -16
	I0719 16:10:15.130084   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:15.630788   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:16.130253   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:16.630555   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:17.130825   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:17.631070   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:18.130360   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:18.630348   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:19.130261   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:19.631108   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:17.094751   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:17.095222   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:17.095255   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:17.095134   70677 retry.go:31] will retry after 1.034134754s: waiting for machine to come up
	I0719 16:10:18.131543   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:18.132037   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:18.132066   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:18.131982   70677 retry.go:31] will retry after 1.492863988s: waiting for machine to come up
	I0719 16:10:19.626841   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:19.627302   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:19.627368   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:19.627283   70677 retry.go:31] will retry after 1.808155573s: waiting for machine to come up
	I0719 16:10:20.361483   67244 pod_ready.go:102] pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace has status "Ready":"False"
	I0719 16:10:22.362338   67244 pod_ready.go:102] pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace has status "Ready":"False"
	I0719 16:10:23.860978   67244 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace has status "Ready":"True"
	I0719 16:10:23.861008   67244 pod_ready.go:81] duration metric: took 12.005984451s for pod "coredns-7db6d8ff4d-w6pnh" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.861021   67244 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.867520   67244 pod_ready.go:92] pod "etcd-flannel-526259" in "kube-system" namespace has status "Ready":"True"
	I0719 16:10:23.867551   67244 pod_ready.go:81] duration metric: took 6.52263ms for pod "etcd-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.867560   67244 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.871734   67244 pod_ready.go:92] pod "kube-apiserver-flannel-526259" in "kube-system" namespace has status "Ready":"True"
	I0719 16:10:23.871756   67244 pod_ready.go:81] duration metric: took 4.188297ms for pod "kube-apiserver-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.871767   67244 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.875786   67244 pod_ready.go:92] pod "kube-controller-manager-flannel-526259" in "kube-system" namespace has status "Ready":"True"
	I0719 16:10:23.875809   67244 pod_ready.go:81] duration metric: took 4.033911ms for pod "kube-controller-manager-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.875821   67244 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-zfzhl" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.879912   67244 pod_ready.go:92] pod "kube-proxy-zfzhl" in "kube-system" namespace has status "Ready":"True"
	I0719 16:10:23.879939   67244 pod_ready.go:81] duration metric: took 4.105908ms for pod "kube-proxy-zfzhl" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:23.879951   67244 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:24.258744   67244 pod_ready.go:92] pod "kube-scheduler-flannel-526259" in "kube-system" namespace has status "Ready":"True"
	I0719 16:10:24.258767   67244 pod_ready.go:81] duration metric: took 378.808511ms for pod "kube-scheduler-flannel-526259" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:24.258778   67244 pod_ready.go:38] duration metric: took 12.412100602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 16:10:24.258792   67244 api_server.go:52] waiting for apiserver process to appear ...
	I0719 16:10:24.258839   67244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 16:10:24.274176   67244 api_server.go:72] duration metric: took 25.772296594s to wait for apiserver process to appear ...
	I0719 16:10:24.274204   67244 api_server.go:88] waiting for apiserver healthz status ...
	I0719 16:10:24.274226   67244 api_server.go:253] Checking apiserver healthz at https://192.168.50.189:8443/healthz ...
	I0719 16:10:24.280731   67244 api_server.go:279] https://192.168.50.189:8443/healthz returned 200:
	ok
	I0719 16:10:24.282094   67244 api_server.go:141] control plane version: v1.30.3
	I0719 16:10:24.282125   67244 api_server.go:131] duration metric: took 7.913918ms to wait for apiserver health ...
	I0719 16:10:24.282137   67244 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 16:10:24.460790   67244 system_pods.go:59] 7 kube-system pods found
	I0719 16:10:24.460819   67244 system_pods.go:61] "coredns-7db6d8ff4d-w6pnh" [686f1306-9eb9-4482-aff7-61d6fed09b3c] Running
	I0719 16:10:24.460824   67244 system_pods.go:61] "etcd-flannel-526259" [9d75fc9c-bb4e-434b-8386-50da4398c595] Running
	I0719 16:10:24.460829   67244 system_pods.go:61] "kube-apiserver-flannel-526259" [65f7d193-de15-48b0-9d96-7bbdac238911] Running
	I0719 16:10:24.460832   67244 system_pods.go:61] "kube-controller-manager-flannel-526259" [0e6323c4-42ab-4dd8-9d1b-a57bfa7bbefd] Running
	I0719 16:10:24.460836   67244 system_pods.go:61] "kube-proxy-zfzhl" [acea0b91-2672-4d22-a1bc-973092126360] Running
	I0719 16:10:24.460839   67244 system_pods.go:61] "kube-scheduler-flannel-526259" [e097d652-6118-47a8-88a5-892e47ca1424] Running
	I0719 16:10:24.460842   67244 system_pods.go:61] "storage-provisioner" [ff36ceae-e43d-4ab6-8a0c-8c4943972dd0] Running
	I0719 16:10:24.460847   67244 system_pods.go:74] duration metric: took 178.703534ms to wait for pod list to return data ...
	I0719 16:10:24.460854   67244 default_sa.go:34] waiting for default service account to be created ...
	I0719 16:10:24.658805   67244 default_sa.go:45] found service account: "default"
	I0719 16:10:24.658829   67244 default_sa.go:55] duration metric: took 197.970126ms for default service account to be created ...
	I0719 16:10:24.658838   67244 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 16:10:20.130249   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:20.630776   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:21.131064   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:21.631046   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:22.131093   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:22.630644   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:23.130675   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:23.630760   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:24.130151   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:24.631039   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:24.861774   67244 system_pods.go:86] 7 kube-system pods found
	I0719 16:10:24.861799   67244 system_pods.go:89] "coredns-7db6d8ff4d-w6pnh" [686f1306-9eb9-4482-aff7-61d6fed09b3c] Running
	I0719 16:10:24.861804   67244 system_pods.go:89] "etcd-flannel-526259" [9d75fc9c-bb4e-434b-8386-50da4398c595] Running
	I0719 16:10:24.861808   67244 system_pods.go:89] "kube-apiserver-flannel-526259" [65f7d193-de15-48b0-9d96-7bbdac238911] Running
	I0719 16:10:24.861812   67244 system_pods.go:89] "kube-controller-manager-flannel-526259" [0e6323c4-42ab-4dd8-9d1b-a57bfa7bbefd] Running
	I0719 16:10:24.861817   67244 system_pods.go:89] "kube-proxy-zfzhl" [acea0b91-2672-4d22-a1bc-973092126360] Running
	I0719 16:10:24.861821   67244 system_pods.go:89] "kube-scheduler-flannel-526259" [e097d652-6118-47a8-88a5-892e47ca1424] Running
	I0719 16:10:24.861824   67244 system_pods.go:89] "storage-provisioner" [ff36ceae-e43d-4ab6-8a0c-8c4943972dd0] Running
	I0719 16:10:24.861829   67244 system_pods.go:126] duration metric: took 202.987428ms to wait for k8s-apps to be running ...
	I0719 16:10:24.861836   67244 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 16:10:24.861878   67244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 16:10:24.877638   67244 system_svc.go:56] duration metric: took 15.792822ms WaitForService to wait for kubelet
	I0719 16:10:24.877671   67244 kubeadm.go:582] duration metric: took 26.375794602s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:10:24.877701   67244 node_conditions.go:102] verifying NodePressure condition ...
	I0719 16:10:25.059537   67244 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 16:10:25.059562   67244 node_conditions.go:123] node cpu capacity is 2
	I0719 16:10:25.059572   67244 node_conditions.go:105] duration metric: took 181.866003ms to run NodePressure ...
	I0719 16:10:25.059583   67244 start.go:241] waiting for startup goroutines ...
	I0719 16:10:25.059589   67244 start.go:246] waiting for cluster config update ...
	I0719 16:10:25.059599   67244 start.go:255] writing updated cluster config ...
	I0719 16:10:25.059842   67244 ssh_runner.go:195] Run: rm -f paused
	I0719 16:10:25.108364   67244 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 16:10:25.110295   67244 out.go:177] * Done! kubectl is now configured to use "flannel-526259" cluster and "default" namespace by default
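	A manual sanity check at this point (hypothetical, not executed by the test run) would be: kubectl --context flannel-526259 -n kube-system get pods — it should list the same seven kube-system pods reported as Running in the system_pods output above.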
	I0719 16:10:21.437404   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:21.437974   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:21.438001   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:21.437926   70677 retry.go:31] will retry after 2.735924416s: waiting for machine to come up
	I0719 16:10:24.175647   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:24.176105   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:24.176132   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:24.176069   70677 retry.go:31] will retry after 2.559846713s: waiting for machine to come up
	I0719 16:10:25.130413   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:25.630717   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:26.130956   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:26.630310   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:27.130352   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:27.630209   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:28.131046   68889 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:10:28.233952   68889 kubeadm.go:1113] duration metric: took 13.275160732s to wait for elevateKubeSystemPrivileges
	I0719 16:10:28.233992   68889 kubeadm.go:394] duration metric: took 24.960926965s to StartCluster
	I0719 16:10:28.234013   68889 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:10:28.234106   68889 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 16:10:28.235707   68889 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:10:28.235936   68889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 16:10:28.235968   68889 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 16:10:28.236029   68889 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-526259"
	I0719 16:10:28.236060   68889 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-526259"
	I0719 16:10:28.235949   68889 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.69 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 16:10:28.236089   68889 host.go:66] Checking if "enable-default-cni-526259" exists ...
	I0719 16:10:28.236124   68889 config.go:182] Loaded profile config "enable-default-cni-526259": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:10:28.236124   68889 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-526259"
	I0719 16:10:28.236166   68889 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-526259"
	I0719 16:10:28.236467   68889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:10:28.236495   68889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:10:28.236567   68889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:10:28.236604   68889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:10:28.237587   68889 out.go:177] * Verifying Kubernetes components...
	I0719 16:10:28.238822   68889 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:10:28.251741   68889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41583
	I0719 16:10:28.251776   68889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I0719 16:10:28.252165   68889 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:10:28.252185   68889 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:10:28.252645   68889 main.go:141] libmachine: Using API Version  1
	I0719 16:10:28.252645   68889 main.go:141] libmachine: Using API Version  1
	I0719 16:10:28.252666   68889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:10:28.252675   68889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:10:28.253006   68889 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:10:28.253057   68889 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:10:28.253265   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetState
	I0719 16:10:28.253481   68889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:10:28.253517   68889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:10:28.257288   68889 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-526259"
	I0719 16:10:28.257329   68889 host.go:66] Checking if "enable-default-cni-526259" exists ...
	I0719 16:10:28.257700   68889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:10:28.257741   68889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:10:28.269773   68889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33593
	I0719 16:10:28.270334   68889 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:10:28.271151   68889 main.go:141] libmachine: Using API Version  1
	I0719 16:10:28.271196   68889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:10:28.271559   68889 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:10:28.271988   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetState
	I0719 16:10:28.274069   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .DriverName
	I0719 16:10:28.275106   68889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36265
	I0719 16:10:28.275470   68889 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:10:28.276227   68889 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:10:28.276907   68889 main.go:141] libmachine: Using API Version  1
	I0719 16:10:28.276929   68889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:10:28.277226   68889 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:10:28.277535   68889 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:10:28.277551   68889 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 16:10:28.277689   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHHostname
	I0719 16:10:28.277902   68889 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 16:10:28.277953   68889 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 16:10:28.281425   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | domain enable-default-cni-526259 has defined MAC address 52:54:00:30:75:0e in network mk-enable-default-cni-526259
	I0719 16:10:28.282181   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:75:0e", ip: ""} in network mk-enable-default-cni-526259: {Iface:virbr1 ExpiryTime:2024-07-19 17:09:45 +0000 UTC Type:0 Mac:52:54:00:30:75:0e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:enable-default-cni-526259 Clientid:01:52:54:00:30:75:0e}
	I0719 16:10:28.282214   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | domain enable-default-cni-526259 has defined IP address 192.168.39.69 and MAC address 52:54:00:30:75:0e in network mk-enable-default-cni-526259
	I0719 16:10:28.282280   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHPort
	I0719 16:10:28.282460   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHKeyPath
	I0719 16:10:28.282604   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHUsername
	I0719 16:10:28.282723   68889 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/enable-default-cni-526259/id_rsa Username:docker}
	I0719 16:10:28.295600   68889 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0719 16:10:28.296076   68889 main.go:141] libmachine: () Calling .GetVersion
	I0719 16:10:28.296568   68889 main.go:141] libmachine: Using API Version  1
	I0719 16:10:28.296599   68889 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 16:10:28.296952   68889 main.go:141] libmachine: () Calling .GetMachineName
	I0719 16:10:28.297117   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetState
	I0719 16:10:28.299070   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .DriverName
	I0719 16:10:28.299283   68889 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 16:10:28.299312   68889 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 16:10:28.299336   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHHostname
	I0719 16:10:28.302864   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | domain enable-default-cni-526259 has defined MAC address 52:54:00:30:75:0e in network mk-enable-default-cni-526259
	I0719 16:10:28.303368   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:75:0e", ip: ""} in network mk-enable-default-cni-526259: {Iface:virbr1 ExpiryTime:2024-07-19 17:09:45 +0000 UTC Type:0 Mac:52:54:00:30:75:0e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:enable-default-cni-526259 Clientid:01:52:54:00:30:75:0e}
	I0719 16:10:28.303385   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | domain enable-default-cni-526259 has defined IP address 192.168.39.69 and MAC address 52:54:00:30:75:0e in network mk-enable-default-cni-526259
	I0719 16:10:28.303588   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHPort
	I0719 16:10:28.303737   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHKeyPath
	I0719 16:10:28.303896   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .GetSSHUsername
	I0719 16:10:28.304058   68889 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/enable-default-cni-526259/id_rsa Username:docker}
	I0719 16:10:28.483186   68889 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 16:10:28.488745   68889 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 16:10:28.658348   68889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 16:10:28.790809   68889 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:10:28.940761   68889 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0719 16:10:28.940857   68889 main.go:141] libmachine: Making call to close driver server
	I0719 16:10:28.940879   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .Close
	I0719 16:10:28.941196   68889 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:10:28.941210   68889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:10:28.941219   68889 main.go:141] libmachine: Making call to close driver server
	I0719 16:10:28.941226   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .Close
	I0719 16:10:28.941494   68889 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:10:28.941512   68889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:10:28.941514   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | Closing plugin on server side
	I0719 16:10:28.942119   68889 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-526259" to be "Ready" ...
	I0719 16:10:28.962413   68889 node_ready.go:49] node "enable-default-cni-526259" has status "Ready":"True"
	I0719 16:10:28.962438   68889 node_ready.go:38] duration metric: took 20.29353ms for node "enable-default-cni-526259" to be "Ready" ...
	I0719 16:10:28.962449   68889 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 16:10:28.968106   68889 main.go:141] libmachine: Making call to close driver server
	I0719 16:10:28.968131   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .Close
	I0719 16:10:28.968414   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | Closing plugin on server side
	I0719 16:10:28.968481   68889 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:10:28.968493   68889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:10:28.974141   68889 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-2qlsm" in "kube-system" namespace to be "Ready" ...
	I0719 16:10:29.449873   68889 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-526259" context rescaled to 1 replicas
	I0719 16:10:29.577747   68889 main.go:141] libmachine: Making call to close driver server
	I0719 16:10:29.577776   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .Close
	I0719 16:10:29.578138   68889 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:10:29.578161   68889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:10:29.578176   68889 main.go:141] libmachine: Making call to close driver server
	I0719 16:10:29.578185   68889 main.go:141] libmachine: (enable-default-cni-526259) Calling .Close
	I0719 16:10:29.578513   68889 main.go:141] libmachine: (enable-default-cni-526259) DBG | Closing plugin on server side
	I0719 16:10:29.578513   68889 main.go:141] libmachine: Successfully made call to close driver server
	I0719 16:10:29.578543   68889 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 16:10:29.580374   68889 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0719 16:10:29.581780   68889 addons.go:510] duration metric: took 1.345806873s for enable addons: enabled=[default-storageclass storage-provisioner]
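	The CoreDNS rewrite logged above (the sed pipeline that injects 192.168.39.1 as host.minikube.internal) could be verified by hand, if desired, with something like: kubectl --context enable-default-cni-526259 -n kube-system get configmap coredns -o yaml — the Corefile should then contain a hosts block mapping host.minikube.internal to 192.168.39.1. (Manual check only; not part of the automated run.)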
	I0719 16:10:26.737548   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:26.738009   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:26.738035   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:26.737966   70677 retry.go:31] will retry after 2.853465326s: waiting for machine to come up
	I0719 16:10:29.595160   70654 main.go:141] libmachine: (bridge-526259) DBG | domain bridge-526259 has defined MAC address 52:54:00:ed:32:aa in network mk-bridge-526259
	I0719 16:10:29.595716   70654 main.go:141] libmachine: (bridge-526259) DBG | unable to find current IP address of domain bridge-526259 in network mk-bridge-526259
	I0719 16:10:29.595763   70654 main.go:141] libmachine: (bridge-526259) DBG | I0719 16:10:29.595686   70677 retry.go:31] will retry after 3.956287658s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.191734251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405435191605251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7e3791b-c842-4439-bc1a-23a3d2250f23 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.192473926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8618eb67-8871-471a-a73d-a4248da02d9f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.192545967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8618eb67-8871-471a-a73d-a4248da02d9f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.192890710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8618eb67-8871-471a-a73d-a4248da02d9f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.242435368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4759009f-93ef-4c11-88c3-bcaf22991b18 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.242541585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4759009f-93ef-4c11-88c3-bcaf22991b18 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.244101986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f325d851-c0fe-44b9-ba19-515a2bb1ef87 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.244813495Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405435244776924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f325d851-c0fe-44b9-ba19-515a2bb1ef87 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.245787484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f10c5a5c-da22-427f-a7c7-2afba797c2c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.245867601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f10c5a5c-da22-427f-a7c7-2afba797c2c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.246157628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f10c5a5c-da22-427f-a7c7-2afba797c2c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.294651070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff61e669-5a61-4a82-81b5-46389b2d34f7 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.294755170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff61e669-5a61-4a82-81b5-46389b2d34f7 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.296519230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f73640d0-dbb6-4ed2-a574-630a5be36cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.297442281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405435297404992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f73640d0-dbb6-4ed2-a574-630a5be36cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.298166091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99737376-fbe9-41a8-8be0-4073b3a4800d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.298270626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99737376-fbe9-41a8-8be0-4073b3a4800d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.298661518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99737376-fbe9-41a8-8be0-4073b3a4800d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.337940731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aefdb583-2583-4ca0-8e39-4160db9d2373 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.338034700Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aefdb583-2583-4ca0-8e39-4160db9d2373 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.339490795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a402357-bd32-400b-bc4f-15a4404257d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.339986923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405435339962110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a402357-bd32-400b-bc4f-15a4404257d3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.340589482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59cce621-2b5a-488e-a337-05b6af0dd37c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.340700922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59cce621-2b5a-488e-a337-05b6af0dd37c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:10:35 default-k8s-diff-port-601445 crio[729]: time="2024-07-19 16:10:35.341107048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404123001961526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3133206986d52f32580778ce3057f740161e5ab5105e0b1c5dbfc8bbf25482e6,PodSandboxId:6cec0feb733592246089a5abd72b9c13fc363c38a8ac85efaed387e39a85b6fd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404111787411084,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 101e74e5-8412-4a68-a1f7-723678a7324e,},Annotations:map[string]string{io.kubernetes.container.hash: 76afe21b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54,PodSandboxId:07b01e0804302e3dea3fa2f78cb7523a7badd760c0272aeba46b16032352da16,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404108499172981,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z7865,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c756208f-51b9-4a5a-932e-d7d38408a532,},Annotations:map[string]string{io.kubernetes.container.hash: 24ab5b69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b,PodSandboxId:1aa11435d46209118b753579eb0946b417d3260d8a8a6e1b42432139bee0097f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404092224827945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4dd721a2-a6f5-4aad-b86d-692d351a6fcf,},Annotations:map[string]string{io.kubernetes.container.hash: 23488f15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912,PodSandboxId:e574b4ae053d95d50e1c7411985a3f8766ae9db8a7f7ed4201514fedae948745,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404092209702239,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r7b2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24eff210-56a6-4b1b-bc19
-7c492c5ce997,},Annotations:map[string]string{io.kubernetes.container.hash: bcad78dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a,PodSandboxId:448550de9f91f09ff56ba9bed5d98956dbe9f5a7da7f46a1dc40b0b6e58ba099,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404088558721764,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a68bc3361a4fe2e287ed3
75664c589aa,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b,PodSandboxId:510612ad4f1ca4a56435a3f122d7ae59dcd0020e479f4741d87c142d73172be6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404088526784898,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d142c2a8e977d7b04e6d8f64e9ffb637,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 92ff3e38,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b,PodSandboxId:ce332af1c8756399469cb6481db1350de5ec03f8bc3dbef74f5e70d9e1341135,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404088535962838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5cb70c1579941a5f13433bb2c77
3c2f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236,PodSandboxId:038bb23c12bf5ab26ec7baefeff2f1ac1997189359800f77b77dcd6688f74ad1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404088428911582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-601445,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9443a5248652ef7aad40924929f72
a7,},Annotations:map[string]string{io.kubernetes.container.hash: 38fb9e11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59cce621-2b5a-488e-a337-05b6af0dd37c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85352e7e71d12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   1aa11435d4620       storage-provisioner
	3133206986d52       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   6cec0feb73359       busybox
	001c96d3b9669       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   07b01e0804302       coredns-7db6d8ff4d-z7865
	5a58e1c6658a8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   1aa11435d4620       storage-provisioner
	6d295bc6e6fb8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      22 minutes ago      Running             kube-proxy                1                   e574b4ae053d9       kube-proxy-r7b2z
	1f566fdead149       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      22 minutes ago      Running             kube-scheduler            1                   448550de9f91f       kube-scheduler-default-k8s-diff-port-601445
	c693018988910       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      22 minutes ago      Running             kube-controller-manager   1                   ce332af1c8756       kube-controller-manager-default-k8s-diff-port-601445
	60e7b95877d59       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   510612ad4f1ca       etcd-default-k8s-diff-port-601445
	65610b0e92d14       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      22 minutes ago      Running             kube-apiserver            1                   038bb23c12bf5       kube-apiserver-default-k8s-diff-port-601445
	
	
	==> coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51615 - 3885 "HINFO IN 6928262908906125533.6899998174746735126. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015049395s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-601445
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-601445
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=default-k8s-diff-port-601445
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_41_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:41:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-601445
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 16:10:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 16:09:07 +0000   Fri, 19 Jul 2024 15:41:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 16:09:07 +0000   Fri, 19 Jul 2024 15:41:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 16:09:07 +0000   Fri, 19 Jul 2024 15:41:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 16:09:07 +0000   Fri, 19 Jul 2024 15:48:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.144
	  Hostname:    default-k8s-diff-port-601445
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c28d45a5c4b438483c32c75a35bff56
	  System UUID:                0c28d45a-5c4b-4384-83c3-2c75a35bff56
	  Boot ID:                    4183ade3-b8bd-4f96-98e9-4b60579e710a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-z7865                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-601445                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-601445              250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-601445     200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-r7b2z                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-601445              100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-h7hgv                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-601445 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-601445 event: Registered Node default-k8s-diff-port-601445 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-601445 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-601445 event: Registered Node default-k8s-diff-port-601445 in Controller
	
	
	==> dmesg <==
	[Jul19 15:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053199] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049149] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.916188] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.425066] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.618348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.593967] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.063637] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061172] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Jul19 15:48] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.147635] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.289072] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.498152] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.065072] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.075217] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +4.612178] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.415316] systemd-fstab-generator[1532]: Ignoring "noauto" option for root device
	[  +5.286644] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.795950] kauditd_printk_skb: 13 callbacks suppressed
	[ +15.405901] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] <==
	{"level":"warn","ts":"2024-07-19T16:07:58.335298Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.684481ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6500542316625965036 > lease_revoke:<id:5a3690cbade2afa0>","response":"size:28"}
	{"level":"info","ts":"2024-07-19T16:07:58.335737Z","caller":"traceutil/trace.go:171","msg":"trace[801836719] linearizableReadLoop","detail":"{readStateIndex:1844; appliedIndex:1843; }","duration":"182.630481ms","start":"2024-07-19T16:07:58.153065Z","end":"2024-07-19T16:07:58.335695Z","steps":["trace[801836719] 'read index received'  (duration: 36.854µs)","trace[801836719] 'applied index is now lower than readState.Index'  (duration: 182.592042ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T16:07:58.335966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.794008ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T16:07:58.336058Z","caller":"traceutil/trace.go:171","msg":"trace[345829461] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1561; }","duration":"183.010372ms","start":"2024-07-19T16:07:58.153034Z","end":"2024-07-19T16:07:58.336044Z","steps":["trace[345829461] 'agreement among raft nodes before linearized reading'  (duration: 182.783696ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T16:07:58.470572Z","caller":"traceutil/trace.go:171","msg":"trace[528945002] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"121.396619ms","start":"2024-07-19T16:07:58.349161Z","end":"2024-07-19T16:07:58.470557Z","steps":["trace[528945002] 'process raft request'  (duration: 121.285942ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T16:08:02.955656Z","caller":"traceutil/trace.go:171","msg":"trace[1799806911] transaction","detail":"{read_only:false; response_revision:1565; number_of_response:1; }","duration":"144.944224ms","start":"2024-07-19T16:08:02.810638Z","end":"2024-07-19T16:08:02.955582Z","steps":["trace[1799806911] 'process raft request'  (duration: 62.213076ms)","trace[1799806911] 'compare'  (duration: 82.526052ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T16:08:10.153671Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1328}
	{"level":"info","ts":"2024-07-19T16:08:10.157919Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1328,"took":"3.952448ms","hash":2873390922,"current-db-size-bytes":2727936,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-19T16:08:10.157969Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2873390922,"revision":1328,"compact-revision":1085}
	{"level":"info","ts":"2024-07-19T16:08:42.818012Z","caller":"traceutil/trace.go:171","msg":"trace[725173430] transaction","detail":"{read_only:false; response_revision:1598; number_of_response:1; }","duration":"101.807501ms","start":"2024-07-19T16:08:42.716184Z","end":"2024-07-19T16:08:42.817991Z","steps":["trace[725173430] 'process raft request'  (duration: 101.426782ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T16:08:43.066655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.963876ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6500542316625965252 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:5a3690cbade2b0c3>","response":"size:40"}
	{"level":"info","ts":"2024-07-19T16:08:43.066949Z","caller":"traceutil/trace.go:171","msg":"trace[1505266470] linearizableReadLoop","detail":"{readStateIndex:1892; appliedIndex:1891; }","duration":"165.50049ms","start":"2024-07-19T16:08:42.901435Z","end":"2024-07-19T16:08:43.066936Z","steps":["trace[1505266470] 'read index received'  (duration: 41.166984ms)","trace[1505266470] 'applied index is now lower than readState.Index'  (duration: 124.332028ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T16:08:43.067069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.639539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T16:08:43.067138Z","caller":"traceutil/trace.go:171","msg":"trace[2032318967] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1598; }","duration":"165.744099ms","start":"2024-07-19T16:08:42.901384Z","end":"2024-07-19T16:08:43.067128Z","steps":["trace[2032318967] 'agreement among raft nodes before linearized reading'  (duration: 165.632863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T16:08:43.327839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.282713ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6500542316625965255 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.144\" mod_revision:1591 > success:<request_put:<key:\"/registry/masterleases/192.168.61.144\" value_size:67 lease:6500542316625965251 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.144\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T16:08:43.327951Z","caller":"traceutil/trace.go:171","msg":"trace[1417034802] transaction","detail":"{read_only:false; response_revision:1599; number_of_response:1; }","duration":"259.685497ms","start":"2024-07-19T16:08:43.068252Z","end":"2024-07-19T16:08:43.327937Z","steps":["trace[1417034802] 'process raft request'  (duration: 127.931325ms)","trace[1417034802] 'compare'  (duration: 131.188656ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T16:08:54.959706Z","caller":"traceutil/trace.go:171","msg":"trace[125631802] transaction","detail":"{read_only:false; response_revision:1608; number_of_response:1; }","duration":"361.000725ms","start":"2024-07-19T16:08:54.598681Z","end":"2024-07-19T16:08:54.959682Z","steps":["trace[125631802] 'process raft request'  (duration: 360.847155ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T16:08:54.95989Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T16:08:54.598663Z","time spent":"361.147314ms","remote":"127.0.0.1:60468","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-l4z637xarijv3q3yylcbl2nd5q\" mod_revision:1600 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-l4z637xarijv3q3yylcbl2nd5q\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-l4z637xarijv3q3yylcbl2nd5q\" > >"}
	{"level":"warn","ts":"2024-07-19T16:08:55.25143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.843072ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6500542316625965314 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1607 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T16:08:55.251531Z","caller":"traceutil/trace.go:171","msg":"trace[640007654] transaction","detail":"{read_only:false; response_revision:1609; number_of_response:1; }","duration":"284.467622ms","start":"2024-07-19T16:08:54.967048Z","end":"2024-07-19T16:08:55.251515Z","steps":["trace[640007654] 'process raft request'  (duration: 132.421859ms)","trace[640007654] 'compare'  (duration: 151.709298ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T16:10:03.182592Z","caller":"traceutil/trace.go:171","msg":"trace[534325217] linearizableReadLoop","detail":"{readStateIndex:1975; appliedIndex:1974; }","duration":"279.296275ms","start":"2024-07-19T16:10:02.903251Z","end":"2024-07-19T16:10:03.182547Z","steps":["trace[534325217] 'read index received'  (duration: 183.363319ms)","trace[534325217] 'applied index is now lower than readState.Index'  (duration: 95.931453ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T16:10:03.182864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.528739ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T16:10:03.182929Z","caller":"traceutil/trace.go:171","msg":"trace[1968172908] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1665; }","duration":"279.66992ms","start":"2024-07-19T16:10:02.903245Z","end":"2024-07-19T16:10:03.182915Z","steps":["trace[1968172908] 'agreement among raft nodes before linearized reading'  (duration: 279.47788ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T16:10:03.183132Z","caller":"traceutil/trace.go:171","msg":"trace[306729927] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"372.498567ms","start":"2024-07-19T16:10:02.810618Z","end":"2024-07-19T16:10:03.183116Z","steps":["trace[306729927] 'process raft request'  (duration: 276.016357ms)","trace[306729927] 'compare'  (duration: 95.766179ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T16:10:03.183301Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T16:10:02.810602Z","time spent":"372.580992ms","remote":"127.0.0.1:60244","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":121,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.144\" mod_revision:1656 > success:<request_put:<key:\"/registry/masterleases/192.168.61.144\" value_size:68 lease:6500542316625965650 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.144\" > >"}
	
	
	==> kernel <==
	 16:10:35 up 22 min,  0 users,  load average: 0.06, 0.08, 0.07
	Linux default-k8s-diff-port-601445 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] <==
	E0719 16:06:12.520100       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:06:12.520130       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:08:11.521949       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:08:11.522475       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 16:08:12.523432       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:08:12.523517       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:08:12.523524       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:08:12.523436       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:08:12.523549       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 16:08:12.524769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 16:08:43.328398       1 trace.go:236] Trace[319888322]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.144,type:*v1.Endpoints,resource:apiServerIPInfo (19-Jul-2024 16:08:42.753) (total time: 574ms):
	Trace[319888322]: ---"initial value restored" 64ms (16:08:42.818)
	Trace[319888322]: ---"Transaction prepared" 248ms (16:08:43.067)
	Trace[319888322]: ---"Txn call completed" 260ms (16:08:43.328)
	Trace[319888322]: [574.390961ms] [574.390961ms] END
	W0719 16:09:12.523909       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:09:12.523984       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:09:12.523991       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:09:12.525138       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:09:12.525233       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 16:09:12.525262       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] <==
	I0719 16:04:54.732698       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:05:24.083240       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:05:24.741011       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:05:54.088457       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:05:54.749129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:06:24.095783       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:06:24.756879       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:06:54.101932       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:06:54.765416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:07:24.108155       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:07:24.775782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:07:54.115874       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:07:54.787449       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:08:24.121628       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:08:24.795056       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:08:54.128736       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:08:54.806802       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:09:24.134834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:09:24.816075       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 16:09:48.823946       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="283.309µs"
	E0719 16:09:54.141511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:09:54.824299       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 16:10:00.824698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="827.213µs"
	E0719 16:10:24.148299       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:10:24.833121       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] <==
	I0719 15:48:12.395519       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:48:12.405662       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.144"]
	I0719 15:48:12.457549       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:48:12.457690       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:48:12.457731       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:48:12.462761       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:48:12.463074       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:48:12.463142       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:12.466208       1 config.go:192] "Starting service config controller"
	I0719 15:48:12.466750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:48:12.466847       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:48:12.466871       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:48:12.467388       1 config.go:319] "Starting node config controller"
	I0719 15:48:12.468132       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:48:12.567151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:48:12.567429       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:48:12.568294       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] <==
	I0719 15:48:09.514298       1 serving.go:380] Generated self-signed cert in-memory
	W0719 15:48:11.487694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 15:48:11.487790       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 15:48:11.487802       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 15:48:11.487808       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 15:48:11.539478       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:48:11.539520       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:11.545847       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:48:11.545927       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:48:11.546794       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:48:11.550543       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 15:48:11.646594       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 16:08:07 default-k8s-diff-port-601445 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:08:18 default-k8s-diff-port-601445 kubelet[940]: E0719 16:08:18.808622     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:08:30 default-k8s-diff-port-601445 kubelet[940]: E0719 16:08:30.809365     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:08:41 default-k8s-diff-port-601445 kubelet[940]: E0719 16:08:41.808852     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:08:55 default-k8s-diff-port-601445 kubelet[940]: E0719 16:08:55.809856     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:09:06 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:06.809529     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:09:07 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:07.828423     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:09:07 default-k8s-diff-port-601445 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:09:07 default-k8s-diff-port-601445 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:09:07 default-k8s-diff-port-601445 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:09:07 default-k8s-diff-port-601445 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:09:19 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:19.810914     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:09:33 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:33.823954     940 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 16:09:33 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:33.824549     940 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 19 16:09:33 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:33.824890     940 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbh8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-h7hgv_kube-system(9b4cdf2e-e6fc-4d88-99f1-31066805f915): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 19 16:09:33 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:33.825439     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:09:48 default-k8s-diff-port-601445 kubelet[940]: E0719 16:09:48.807956     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:10:00 default-k8s-diff-port-601445 kubelet[940]: E0719 16:10:00.808851     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:10:07 default-k8s-diff-port-601445 kubelet[940]: E0719 16:10:07.828106     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:10:07 default-k8s-diff-port-601445 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:10:07 default-k8s-diff-port-601445 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:10:07 default-k8s-diff-port-601445 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:10:07 default-k8s-diff-port-601445 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:10:15 default-k8s-diff-port-601445 kubelet[940]: E0719 16:10:15.808152     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	Jul 19 16:10:29 default-k8s-diff-port-601445 kubelet[940]: E0719 16:10:29.808586     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-h7hgv" podUID="9b4cdf2e-e6fc-4d88-99f1-31066805f915"
	
	
	==> storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] <==
	I0719 15:48:12.360500       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 15:48:42.363568       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] <==
	I0719 15:48:43.101018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 15:48:43.112535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 15:48:43.112627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 15:49:00.513710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 15:49:00.513880       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-601445_9cc5af00-1d19-4faa-a45a-a37e0574e41a!
	I0719 15:49:00.515154       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d52cd4ec-cef5-457d-bdf9-faf7f2a7401c", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-601445_9cc5af00-1d19-4faa-a45a-a37e0574e41a became leader
	I0719 15:49:00.614362       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-601445_9cc5af00-1d19-4faa-a45a-a37e0574e41a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-h7hgv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 describe pod metrics-server-569cc877fc-h7hgv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-601445 describe pod metrics-server-569cc877fc-h7hgv: exit status 1 (67.395928ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-h7hgv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-601445 describe pod metrics-server-569cc877fc-h7hgv: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (528.00s)
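Note: the post-mortem step at helpers_test.go:261 above collects the non-running pods by listing pods in every namespace with the field selector status.phase!=Running. The following is a minimal client-go sketch of that same query, not the harness code itself; the kubeconfig path is a hypothetical placeholder for the profile context the test actually uses.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the harness resolves the profile's context instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// An empty namespace lists pods across all namespaces, like kubectl's -A flag.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}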

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (330.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817144 -n embed-certs-817144
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-19 16:07:32.98020312 +0000 UTC m=+6427.775789670
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-817144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-817144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.869µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-817144 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-817144 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-817144 logs -n 25: (1.291291367s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 16:06 UTC | 19 Jul 24 16:06 UTC |
	| start   | -p newest-cni-850417 --memory=2200 --alsologtostderr   | newest-cni-850417            | jenkins | v1.33.1 | 19 Jul 24 16:06 UTC | 19 Jul 24 16:07 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-850417             | newest-cni-850417            | jenkins | v1.33.1 | 19 Jul 24 16:07 UTC | 19 Jul 24 16:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-850417                                   | newest-cni-850417            | jenkins | v1.33.1 | 19 Jul 24 16:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 16:07 UTC | 19 Jul 24 16:07 UTC |
	| start   | -p auto-526259 --memory=3072                           | auto-526259                  | jenkins | v1.33.1 | 19 Jul 24 16:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 16:07:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 16:07:29.911534   65692 out.go:291] Setting OutFile to fd 1 ...
	I0719 16:07:29.911757   65692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 16:07:29.911765   65692 out.go:304] Setting ErrFile to fd 2...
	I0719 16:07:29.911769   65692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 16:07:29.911931   65692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 16:07:29.912470   65692 out.go:298] Setting JSON to false
	I0719 16:07:29.913370   65692 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6596,"bootTime":1721398654,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 16:07:29.913424   65692 start.go:139] virtualization: kvm guest
	I0719 16:07:29.915656   65692 out.go:177] * [auto-526259] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 16:07:29.916931   65692 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 16:07:29.916986   65692 notify.go:220] Checking for updates...
	I0719 16:07:29.919061   65692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:07:29.920151   65692 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 16:07:29.921135   65692 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 16:07:29.922231   65692 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 16:07:29.923455   65692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:07:29.925076   65692 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:07:29.925182   65692 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 16:07:29.925290   65692 config.go:182] Loaded profile config "newest-cni-850417": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 16:07:29.925375   65692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 16:07:29.961509   65692 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 16:07:29.962587   65692 start.go:297] selected driver: kvm2
	I0719 16:07:29.962610   65692 start.go:901] validating driver "kvm2" against <nil>
	I0719 16:07:29.962623   65692 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:07:29.963615   65692 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:07:29.963736   65692 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 16:07:29.978976   65692 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 16:07:29.979018   65692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 16:07:29.979323   65692 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:07:29.979405   65692 cni.go:84] Creating CNI manager for ""
	I0719 16:07:29.979425   65692 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 16:07:29.979434   65692 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:07:29.979506   65692 start.go:340] cluster config:
	{Name:auto-526259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-526259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 16:07:29.979656   65692 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:07:29.982348   65692 out.go:177] * Starting "auto-526259" primary control-plane node in "auto-526259" cluster
	I0719 16:07:29.983523   65692 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 16:07:29.983561   65692 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 16:07:29.983568   65692 cache.go:56] Caching tarball of preloaded images
	I0719 16:07:29.983678   65692 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 16:07:29.983690   65692 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 16:07:29.983779   65692 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/auto-526259/config.json ...
	I0719 16:07:29.983795   65692 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/auto-526259/config.json: {Name:mk66885821c544eb20d5495959da42f5ce2096c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:07:29.983926   65692 start.go:360] acquireMachinesLock for auto-526259: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:07:29.983965   65692 start.go:364] duration metric: took 27.101µs to acquireMachinesLock for "auto-526259"
	I0719 16:07:29.983980   65692 start.go:93] Provisioning new machine with config: &{Name:auto-526259 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-526259 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 16:07:29.984031   65692 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.643951886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405253643926549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41435467-2c73-426a-a5b4-e8350b0921aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.644577755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd1dcd45-335b-412f-9621-639b2b4f06f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.644696257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd1dcd45-335b-412f-9621-639b2b4f06f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.644899126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd1dcd45-335b-412f-9621-639b2b4f06f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.688063726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4936da3d-f9c6-4652-a9c4-ee1f8a486993 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.688141870Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4936da3d-f9c6-4652-a9c4-ee1f8a486993 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.689839111Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5bb8dde-cb70-4590-ac27-e0607566c529 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.690394476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405253690367192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5bb8dde-cb70-4590-ac27-e0607566c529 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.691052421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00fb7884-43fd-4d0e-b3ca-a399a3213fad name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.691276440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00fb7884-43fd-4d0e-b3ca-a399a3213fad name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.691513272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00fb7884-43fd-4d0e-b3ca-a399a3213fad name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.730143122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a187cf1f-1a33-4787-9c64-6eb0050d9165 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.730230036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a187cf1f-1a33-4787-9c64-6eb0050d9165 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.731484474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52ee413d-de99-4598-8b25-6c384a6d0a3d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.732042797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405253732019254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52ee413d-de99-4598-8b25-6c384a6d0a3d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.732517579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1639eca9-014f-4634-87f1-da95fe9e1bec name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.732571606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1639eca9-014f-4634-87f1-da95fe9e1bec name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.732812565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1639eca9-014f-4634-87f1-da95fe9e1bec name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.769870645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0fdcd631-9205-4b78-9d7b-1ac333a20120 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.769967160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0fdcd631-9205-4b78-9d7b-1ac333a20120 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.771521300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0f16ca5-0d38-48e2-9888-2e2a78064b07 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.772079929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405253772051762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0f16ca5-0d38-48e2-9888-2e2a78064b07 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.772882777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50f522f6-481d-4614-a94a-dccfe62f2c66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.772954788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50f522f6-481d-4614-a94a-dccfe62f2c66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:07:33 embed-certs-817144 crio[729]: time="2024-07-19 16:07:33.773252407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721404144933787959,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 3,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4856880cbbbc2e172941fbae3287528df54324eba50196e3e210b0ab33da3f08,PodSandboxId:342d3a594dfc12130336efd2eba1b01484a7833b927f7740fa49b94a42bbf867,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1721404124534391191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 796e5718-64e1-485b-b2eb-849dc0e300a3,},Annotations:map[string]string{io.kubernetes.container.hash: 22310489,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004,PodSandboxId:1df5728cfdc2ae8969452d93311821014956bdf6886ea14f81dd59a6448b8d2d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721404121857706887,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-n945p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e2090d-a652-4716-b47e-be8f3b3679fa,},Annotations:map[string]string{io.kubernetes.container.hash: e8d68233,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff,PodSandboxId:3dd1229229ae7a771ea8692632aa59fea27e1b4f816949530416df36e5ca25e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721404114080722817,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
dd14f391-0850-487a-b394-4e243265e2ae,},Annotations:map[string]string{io.kubernetes.container.hash: 46c31450,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32,PodSandboxId:56445467b2f3bb30bfb897b52ba169bdbe63a51b0f739bf4e4e41d45f4bc68b9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721404114056402624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4d4g9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93ffa175-3bfe-4477-be1a-82238d78b
186,},Annotations:map[string]string{io.kubernetes.container.hash: 57e8ab43,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2,PodSandboxId:322c44f99e83b4cfb6edf548814957e165c8b0089dca5fb13c748349d2e2b01b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721404109686110763,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da72dd8ded532d149359d0db271f816e,},Annotations:map[string]string{io.kub
ernetes.container.hash: fcd35078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10,PodSandboxId:9edba2c45444273afa4e75bb323c622dbdbb0620bab6bda3065d4edf531072eb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721404109678128663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f41a70830e93a29bad5864aeef6614c8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676,PodSandboxId:d2f3f20eab4c04a604367bca5c616880dc673c2d0a7a23d6e7373699845f8b47,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721404109585864533,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8cc6a042b3421c01d3088ba645828a4,},Annotations:map[string]string{io.kubernetes.container.hash:
a38d6ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56,PodSandboxId:1ddec419a83ee180874f2ac3dbd93cf011f1d7ed0d217e8cee8d667d09b1bd4d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721404109540897848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-817144,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6465087775604f43887857c10622be32,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50f522f6-481d-4614-a94a-dccfe62f2c66 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	33ca90d25224c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   3dd1229229ae7       storage-provisioner
	4856880cbbbc2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   342d3a594dfc1       busybox
	79faf7b7b4478       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   1df5728cfdc2a       coredns-7db6d8ff4d-n945p
	4ab77ba1bf35a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       2                   3dd1229229ae7       storage-provisioner
	760d42fba7d1a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      18 minutes ago      Running             kube-proxy                1                   56445467b2f3b       kube-proxy-4d4g9
	b5cdfd8260b76       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   322c44f99e83b       etcd-embed-certs-817144
	f82d9ede0d89b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      19 minutes ago      Running             kube-scheduler            1                   9edba2c454442       kube-scheduler-embed-certs-817144
	e92e20675555d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      19 minutes ago      Running             kube-apiserver            1                   d2f3f20eab4c0       kube-apiserver-embed-certs-817144
	4c26eb67ddb9a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      19 minutes ago      Running             kube-controller-manager   1                   1ddec419a83ee       kube-controller-manager-embed-certs-817144
	
	
	==> coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44502 - 40098 "HINFO IN 710626314888396658.4129644044510388121. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023242924s
	
	
	==> describe nodes <==
	Name:               embed-certs-817144
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-817144
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=embed-certs-817144
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T15_38_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 15:38:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-817144
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 16:07:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 16:04:21 +0000   Fri, 19 Jul 2024 15:38:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 16:04:21 +0000   Fri, 19 Jul 2024 15:38:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 16:04:21 +0000   Fri, 19 Jul 2024 15:38:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 16:04:21 +0000   Fri, 19 Jul 2024 15:48:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.37
	  Hostname:    embed-certs-817144
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 437afac46acd4383a51d50d4eabced8c
	  System UUID:                437afac4-6acd-4383-a51d-50d4eabced8c
	  Boot ID:                    c07ff149-525e-46a5-8746-fb724d8ffcc8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         27m
	  kube-system                 coredns-7db6d8ff4d-n945p                      100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     28m
	  kube-system                 etcd-embed-certs-817144                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         28m
	  kube-system                 kube-apiserver-embed-certs-817144             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-controller-manager-embed-certs-817144    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-proxy-4d4g9                              0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-scheduler-embed-certs-817144             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 metrics-server-569cc877fc-2tsch               100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         27m
	  kube-system                 storage-provisioner                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node embed-certs-817144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-817144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-817144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-817144 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node embed-certs-817144 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-817144 event: Registered Node embed-certs-817144 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-817144 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-817144 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-817144 event: Registered Node embed-certs-817144 in Controller
	
	
	==> dmesg <==
	[Jul19 15:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051481] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043260] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.758070] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.325203] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619492] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.079673] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.062225] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071331] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.205090] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.123084] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.292113] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.454412] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.063316] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.967858] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.600107] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.940097] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.806844] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.458162] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] <==
	{"level":"info","ts":"2024-07-19T15:48:30.156536Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T15:48:30.158405Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.37:2380"}
	{"level":"info","ts":"2024-07-19T15:48:30.158434Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.37:2380"}
	{"level":"info","ts":"2024-07-19T15:48:30.159064Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"553457f1c6c22d03","initial-advertise-peer-urls":["https://192.168.72.37:2380"],"listen-peer-urls":["https://192.168.72.37:2380"],"advertise-client-urls":["https://192.168.72.37:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.37:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T15:48:30.159112Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T15:48:31.401249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T15:48:31.401304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T15:48:31.401361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 received MsgPreVoteResp from 553457f1c6c22d03 at term 2"}
	{"level":"info","ts":"2024-07-19T15:48:31.401376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.401382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 received MsgVoteResp from 553457f1c6c22d03 at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.401391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"553457f1c6c22d03 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.401412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 553457f1c6c22d03 elected leader 553457f1c6c22d03 at term 3"}
	{"level":"info","ts":"2024-07-19T15:48:31.403059Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"553457f1c6c22d03","local-member-attributes":"{Name:embed-certs-817144 ClientURLs:[https://192.168.72.37:2379]}","request-path":"/0/members/553457f1c6c22d03/attributes","cluster-id":"ea1c0389329f2e90","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T15:48:31.403106Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:48:31.403301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T15:48:31.403353Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T15:48:31.403388Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T15:48:31.405072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.37:2379"}
	{"level":"info","ts":"2024-07-19T15:48:31.405576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T15:58:31.434089Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":864}
	{"level":"info","ts":"2024-07-19T15:58:31.44487Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":864,"took":"9.975076ms","hash":1624098574,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2633728,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-07-19T15:58:31.444952Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1624098574,"revision":864,"compact-revision":-1}
	{"level":"info","ts":"2024-07-19T16:03:31.44112Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1107}
	{"level":"info","ts":"2024-07-19T16:03:31.444846Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1107,"took":"3.41942ms","hash":1784320579,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-19T16:03:31.444893Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1784320579,"revision":1107,"compact-revision":864}
	
	
	==> kernel <==
	 16:07:34 up 19 min,  0 users,  load average: 0.23, 0.11, 0.05
	Linux embed-certs-817144 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] <==
	I0719 16:01:33.648326       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:03:32.651567       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:03:32.651914       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0719 16:03:33.652679       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:03:33.652768       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:03:33.652775       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:03:33.652689       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:03:33.652801       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 16:03:33.653892       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:04:33.653876       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:04:33.653967       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:04:33.653975       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:04:33.654021       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:04:33.654033       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 16:04:33.656103       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:06:33.654847       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:06:33.655107       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 16:06:33.655122       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0719 16:06:33.657261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0719 16:06:33.657422       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0719 16:06:33.657490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] <==
	I0719 16:01:45.881394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:02:15.402026       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:02:15.889863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:02:45.406736       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:02:45.898300       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:03:15.412145       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:03:15.906747       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:03:45.417097       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:03:45.914313       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:04:15.421416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:04:15.921183       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:04:45.426965       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:04:45.738438       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="220.416µs"
	I0719 16:04:45.929685       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0719 16:04:57.738025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="113.296µs"
	E0719 16:05:15.431959       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:05:15.939423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:05:45.437152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:05:45.947601       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:06:15.442490       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:06:15.955884       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:06:45.448884       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:06:45.965278       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0719 16:07:15.454887       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0719 16:07:15.976064       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] <==
	I0719 15:48:34.241079       1 server_linux.go:69] "Using iptables proxy"
	I0719 15:48:34.258100       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.37"]
	I0719 15:48:34.318820       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 15:48:34.318856       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 15:48:34.318871       1 server_linux.go:165] "Using iptables Proxier"
	I0719 15:48:34.321474       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 15:48:34.321780       1 server.go:872] "Version info" version="v1.30.3"
	I0719 15:48:34.321810       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:34.324352       1 config.go:192] "Starting service config controller"
	I0719 15:48:34.324390       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 15:48:34.324414       1 config.go:101] "Starting endpoint slice config controller"
	I0719 15:48:34.324418       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 15:48:34.327056       1 config.go:319] "Starting node config controller"
	I0719 15:48:34.327067       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 15:48:34.424912       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 15:48:34.425017       1 shared_informer.go:320] Caches are synced for service config
	I0719 15:48:34.427901       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] <==
	I0719 15:48:30.831312       1 serving.go:380] Generated self-signed cert in-memory
	I0719 15:48:32.738517       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 15:48:32.739959       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 15:48:32.757455       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 15:48:32.758057       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0719 15:48:32.758109       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0719 15:48:32.758153       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 15:48:32.758812       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 15:48:32.767429       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:48:32.760123       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0719 15:48:32.769686       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0719 15:48:32.859266       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0719 15:48:32.867985       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 15:48:32.870084       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Jul 19 16:05:26 embed-certs-817144 kubelet[940]: E0719 16:05:26.721801     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:05:28 embed-certs-817144 kubelet[940]: E0719 16:05:28.747148     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:05:28 embed-certs-817144 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:05:28 embed-certs-817144 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:05:28 embed-certs-817144 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:05:28 embed-certs-817144 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:05:40 embed-certs-817144 kubelet[940]: E0719 16:05:40.721223     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:05:55 embed-certs-817144 kubelet[940]: E0719 16:05:55.721218     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:06:09 embed-certs-817144 kubelet[940]: E0719 16:06:09.721990     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:06:22 embed-certs-817144 kubelet[940]: E0719 16:06:22.720679     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:06:28 embed-certs-817144 kubelet[940]: E0719 16:06:28.748422     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:06:28 embed-certs-817144 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:06:28 embed-certs-817144 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:06:28 embed-certs-817144 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:06:28 embed-certs-817144 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:06:36 embed-certs-817144 kubelet[940]: E0719 16:06:36.722664     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:06:51 embed-certs-817144 kubelet[940]: E0719 16:06:51.721750     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:07:06 embed-certs-817144 kubelet[940]: E0719 16:07:06.721526     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:07:21 embed-certs-817144 kubelet[940]: E0719 16:07:21.721031     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	Jul 19 16:07:28 embed-certs-817144 kubelet[940]: E0719 16:07:28.748257     940 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 16:07:28 embed-certs-817144 kubelet[940]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 16:07:28 embed-certs-817144 kubelet[940]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 16:07:28 embed-certs-817144 kubelet[940]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 16:07:28 embed-certs-817144 kubelet[940]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 16:07:32 embed-certs-817144 kubelet[940]: E0719 16:07:32.729686     940 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-2tsch" podUID="809cb05e-d781-476e-a84b-dd009d044ac5"
	
	
	==> storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] <==
	I0719 15:49:05.033921       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 15:49:05.044590       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 15:49:05.044954       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 15:49:22.449913       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 15:49:22.450791       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-817144_acfafd32-0778-426e-b469-e48471974d10!
	I0719 15:49:22.452435       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6981e41-8a53-4193-8752-45c6c930dbfe", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-817144_acfafd32-0778-426e-b469-e48471974d10 became leader
	I0719 15:49:22.551709       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-817144_acfafd32-0778-426e-b469-e48471974d10!
	
	
	==> storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] <==
	I0719 15:48:34.206575       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 15:49:04.214828       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-817144 -n embed-certs-817144
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-817144 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-2tsch
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-817144 describe pod metrics-server-569cc877fc-2tsch
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-817144 describe pod metrics-server-569cc877fc-2tsch: exit status 1 (66.793765ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-2tsch" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-817144 describe pod metrics-server-569cc877fc-2tsch: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (330.78s)
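
For context on how these AddonExistsAfterStop checks behave: the test waits up to 9m0s for pods matching a label selector (for example "k8s-app=kubernetes-dashboard" in the "kubernetes-dashboard" namespace, per start_stop_delete_test.go:287 in the next section) and keeps retrying when the pod list call fails, which is why the "connection refused" warnings repeat while the API server is down. What follows is only an illustrative client-go sketch of that polling pattern, not the minikube helper itself; the kubeconfig context name, namespace, and selector are assumptions taken from the logs above and below.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig context named after the profile under test.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "embed-certs-817144"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 5s for up to 9 minutes, mirroring the timeout in the report.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, listErr := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if listErr != nil {
				// Transient errors (e.g. connection refused while the API server
				// is restarting) are logged and retried rather than ending the wait.
				fmt.Println("pod list failed, retrying:", listErr)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pods never reached Running:", err)
	}
}
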

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (104.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
	[the identical "connection refused" warning repeated verbatim on every poll attempt; duplicates elided]
E0719 16:05:32.082035   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.102:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.102:8443: connect: connection refused
	[the same warning repeated verbatim until the 9m0s wait expired; duplicates elided]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (222.708826ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-862924" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-862924 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-862924 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.393µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-862924 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
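With the apiserver refusing connections, kubectl cannot report anything about the dashboard addon, so the node has to be inspected directly. A rough sketch of how one might do that for this profile (assuming the VM itself is still up, as the Host=Running status below suggests):

	out/minikube-linux-amd64 -p old-k8s-version-862924 ssh "sudo crictl ps -a"
	out/minikube-linux-amd64 -p old-k8s-version-862924 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"
	out/minikube-linux-amd64 -p old-k8s-version-862924 logs -n 25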
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (217.243177ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-862924 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-862924 logs -n 25: (1.674500404s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-127438 -- sudo                         | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-127438                                 | cert-options-127438          | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-574044                           | kubernetes-upgrade-574044    | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:37 UTC |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:37 UTC | 19 Jul 24 15:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-817144            | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-382231             | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-382231                                   | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:39 UTC | 19 Jul 24 15:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-939600                              | cert-expiration-939600       | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	| delete  | -p                                                     | disable-driver-mounts-885817 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:40 UTC |
	|         | disable-driver-mounts-885817                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:40 UTC | 19 Jul 24 15:41 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-862924        | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:41 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-601445  | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-817144                 | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-382231                  | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-817144                                  | embed-certs-817144           | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| start   | -p no-preload-382231 --memory=2200                     | no-preload-382231            | jenkins | v1.33.1 | 19 Jul 24 15:42 UTC | 19 Jul 24 15:52 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-862924             | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC | 19 Jul 24 15:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-862924                              | old-k8s-version-862924       | jenkins | v1.33.1 | 19 Jul 24 15:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-601445       | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-601445 | jenkins | v1.33.1 | 19 Jul 24 15:44 UTC | 19 Jul 24 15:52 UTC |
	|         | default-k8s-diff-port-601445                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 15:44:39
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:44:39.385142   59208 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:44:39.385249   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385257   59208 out.go:304] Setting ErrFile to fd 2...
	I0719 15:44:39.385261   59208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:44:39.385405   59208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:44:39.385919   59208 out.go:298] Setting JSON to false
	I0719 15:44:39.386767   59208 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5225,"bootTime":1721398654,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:44:39.386817   59208 start.go:139] virtualization: kvm guest
	I0719 15:44:39.390104   59208 out.go:177] * [default-k8s-diff-port-601445] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:44:39.391867   59208 notify.go:220] Checking for updates...
	I0719 15:44:39.391890   59208 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:44:39.393463   59208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:44:39.394883   59208 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:44:39.396081   59208 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:44:39.397280   59208 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:44:39.398540   59208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:44:39.400177   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:44:39.400543   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.400601   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.415749   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
	I0719 15:44:39.416104   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.416644   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.416664   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.416981   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.417206   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.417443   59208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:44:39.417751   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:44:39.417793   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:44:39.432550   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0719 15:44:39.433003   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:44:39.433478   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:44:39.433504   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:44:39.433836   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:44:39.434083   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:44:39.467474   59208 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 15:44:38.674498   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:39.468897   59208 start.go:297] selected driver: kvm2
	I0719 15:44:39.468921   59208 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.469073   59208 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:44:39.470083   59208 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.470178   59208 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 15:44:39.485232   59208 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 15:44:39.485586   59208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:44:39.485616   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:44:39.485624   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:44:39.485666   59208 start.go:340] cluster config:
	{Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:44:39.485752   59208 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:44:39.487537   59208 out.go:177] * Starting "default-k8s-diff-port-601445" primary control-plane node in "default-k8s-diff-port-601445" cluster
	I0719 15:44:39.488672   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:44:39.488709   59208 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 15:44:39.488718   59208 cache.go:56] Caching tarball of preloaded images
	I0719 15:44:39.488795   59208 preload.go:172] Found /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 15:44:39.488807   59208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 15:44:39.488895   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:44:39.489065   59208 start.go:360] acquireMachinesLock for default-k8s-diff-port-601445: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:44:41.746585   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:47.826521   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:50.898507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:44:56.978531   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:00.050437   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:06.130631   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:09.202570   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:15.282481   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:18.354537   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:24.434488   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:27.506515   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:33.586522   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:36.658503   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:42.738573   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:45.810538   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:51.890547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:45:54.962507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:01.042509   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:04.114621   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:10.194576   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:13.266450   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:19.346524   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:22.418506   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:28.498553   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:31.570507   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:37.650477   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:40.722569   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:46.802495   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:49.874579   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:55.954547   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:46:59.026454   58376 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.37:22: connect: no route to host
	I0719 15:47:02.030619   58417 start.go:364] duration metric: took 4m36.939495617s to acquireMachinesLock for "no-preload-382231"
	I0719 15:47:02.030679   58417 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:02.030685   58417 fix.go:54] fixHost starting: 
	I0719 15:47:02.031010   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:02.031039   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:02.046256   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43525
	I0719 15:47:02.046682   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:02.047151   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:47:02.047178   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:02.047573   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:02.047818   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:02.048023   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:47:02.049619   58417 fix.go:112] recreateIfNeeded on no-preload-382231: state=Stopped err=<nil>
	I0719 15:47:02.049641   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	W0719 15:47:02.049785   58417 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:02.051800   58417 out.go:177] * Restarting existing kvm2 VM for "no-preload-382231" ...
	I0719 15:47:02.028090   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:02.028137   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028489   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:47:02.028517   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:47:02.028696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:47:02.030488   58376 machine.go:97] duration metric: took 4m37.428160404s to provisionDockerMachine
	I0719 15:47:02.030529   58376 fix.go:56] duration metric: took 4m37.450063037s for fixHost
	I0719 15:47:02.030535   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 4m37.450081944s
	W0719 15:47:02.030559   58376 start.go:714] error starting host: provision: host is not running
	W0719 15:47:02.030673   58376 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0719 15:47:02.030686   58376 start.go:729] Will try again in 5 seconds ...
	I0719 15:47:02.053160   58417 main.go:141] libmachine: (no-preload-382231) Calling .Start
	I0719 15:47:02.053325   58417 main.go:141] libmachine: (no-preload-382231) Ensuring networks are active...
	I0719 15:47:02.054289   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network default is active
	I0719 15:47:02.054786   58417 main.go:141] libmachine: (no-preload-382231) Ensuring network mk-no-preload-382231 is active
	I0719 15:47:02.055259   58417 main.go:141] libmachine: (no-preload-382231) Getting domain xml...
	I0719 15:47:02.056202   58417 main.go:141] libmachine: (no-preload-382231) Creating domain...
	I0719 15:47:03.270495   58417 main.go:141] libmachine: (no-preload-382231) Waiting to get IP...
	I0719 15:47:03.271595   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.272074   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.272151   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.272057   59713 retry.go:31] will retry after 239.502065ms: waiting for machine to come up
	I0719 15:47:03.513745   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.514224   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.514264   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.514191   59713 retry.go:31] will retry after 315.982717ms: waiting for machine to come up
	I0719 15:47:03.831739   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:03.832155   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:03.832187   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:03.832111   59713 retry.go:31] will retry after 468.820113ms: waiting for machine to come up
	I0719 15:47:04.302865   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.303273   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.303306   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.303236   59713 retry.go:31] will retry after 526.764683ms: waiting for machine to come up
	I0719 15:47:04.832048   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:04.832551   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:04.832583   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:04.832504   59713 retry.go:31] will retry after 754.533212ms: waiting for machine to come up
	I0719 15:47:07.032310   58376 start.go:360] acquireMachinesLock for embed-certs-817144: {Name:mk707c0f2200ec1e3ce6b294507d2f417bea5c9a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:47:05.588374   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:05.588834   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:05.588862   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:05.588785   59713 retry.go:31] will retry after 757.18401ms: waiting for machine to come up
	I0719 15:47:06.347691   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:06.348135   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:06.348164   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:06.348053   59713 retry.go:31] will retry after 1.097437331s: waiting for machine to come up
	I0719 15:47:07.446836   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:07.447199   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:07.447219   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:07.447158   59713 retry.go:31] will retry after 1.448513766s: waiting for machine to come up
	I0719 15:47:08.897886   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:08.898289   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:08.898317   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:08.898216   59713 retry.go:31] will retry after 1.583843671s: waiting for machine to come up
	I0719 15:47:10.483476   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:10.483934   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:10.483963   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:10.483864   59713 retry.go:31] will retry after 1.86995909s: waiting for machine to come up
	I0719 15:47:12.355401   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:12.355802   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:12.355827   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:12.355762   59713 retry.go:31] will retry after 2.577908462s: waiting for machine to come up
	I0719 15:47:14.934837   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:14.935263   58417 main.go:141] libmachine: (no-preload-382231) DBG | unable to find current IP address of domain no-preload-382231 in network mk-no-preload-382231
	I0719 15:47:14.935285   58417 main.go:141] libmachine: (no-preload-382231) DBG | I0719 15:47:14.935225   59713 retry.go:31] will retry after 3.158958575s: waiting for machine to come up
	I0719 15:47:19.278747   58817 start.go:364] duration metric: took 3m55.914249116s to acquireMachinesLock for "old-k8s-version-862924"
	I0719 15:47:19.278822   58817 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:19.278831   58817 fix.go:54] fixHost starting: 
	I0719 15:47:19.279163   58817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:19.279196   58817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:19.294722   58817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0719 15:47:19.295092   58817 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:19.295537   58817 main.go:141] libmachine: Using API Version  1
	I0719 15:47:19.295561   58817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:19.295950   58817 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:19.296186   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:19.296333   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetState
	I0719 15:47:19.297864   58817 fix.go:112] recreateIfNeeded on old-k8s-version-862924: state=Stopped err=<nil>
	I0719 15:47:19.297895   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	W0719 15:47:19.298077   58817 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:19.300041   58817 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-862924" ...
	I0719 15:47:18.095456   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095912   58417 main.go:141] libmachine: (no-preload-382231) Found IP for machine: 192.168.39.227
	I0719 15:47:18.095936   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has current primary IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.095942   58417 main.go:141] libmachine: (no-preload-382231) Reserving static IP address...
	I0719 15:47:18.096317   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.096357   58417 main.go:141] libmachine: (no-preload-382231) Reserved static IP address: 192.168.39.227
	I0719 15:47:18.096376   58417 main.go:141] libmachine: (no-preload-382231) DBG | skip adding static IP to network mk-no-preload-382231 - found existing host DHCP lease matching {name: "no-preload-382231", mac: "52:54:00:72:09:0a", ip: "192.168.39.227"}
	I0719 15:47:18.096392   58417 main.go:141] libmachine: (no-preload-382231) DBG | Getting to WaitForSSH function...
	I0719 15:47:18.096407   58417 main.go:141] libmachine: (no-preload-382231) Waiting for SSH to be available...
	I0719 15:47:18.098619   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.098978   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.099008   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.099122   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH client type: external
	I0719 15:47:18.099151   58417 main.go:141] libmachine: (no-preload-382231) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa (-rw-------)
	I0719 15:47:18.099183   58417 main.go:141] libmachine: (no-preload-382231) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:18.099196   58417 main.go:141] libmachine: (no-preload-382231) DBG | About to run SSH command:
	I0719 15:47:18.099210   58417 main.go:141] libmachine: (no-preload-382231) DBG | exit 0
	I0719 15:47:18.222285   58417 main.go:141] libmachine: (no-preload-382231) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:18.222607   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetConfigRaw
	I0719 15:47:18.223181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.225751   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226062   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.226105   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.226327   58417 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/config.json ...
	I0719 15:47:18.226504   58417 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:18.226520   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:18.226684   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.228592   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.228936   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.228960   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.229094   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.229246   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229398   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.229516   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.229663   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.229887   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.229901   58417 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:18.330731   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:18.330764   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331053   58417 buildroot.go:166] provisioning hostname "no-preload-382231"
	I0719 15:47:18.331084   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.331282   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.333905   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334212   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.334270   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.334331   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.334510   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334705   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.334850   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.335030   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.335216   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.335230   58417 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-382231 && echo "no-preload-382231" | sudo tee /etc/hostname
	I0719 15:47:18.453128   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-382231
	
	I0719 15:47:18.453151   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.455964   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456323   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.456349   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.456549   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.456822   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457010   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.457158   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.457300   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.457535   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.457561   58417 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-382231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-382231/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-382231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:18.568852   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:18.568878   58417 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:18.568902   58417 buildroot.go:174] setting up certificates
	I0719 15:47:18.568915   58417 provision.go:84] configureAuth start
	I0719 15:47:18.568924   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetMachineName
	I0719 15:47:18.569240   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:18.571473   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.571757   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.571783   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.572029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.573941   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574213   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.574247   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.574393   58417 provision.go:143] copyHostCerts
	I0719 15:47:18.574455   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:18.574465   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:18.574528   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:18.574615   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:18.574622   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:18.574645   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:18.574696   58417 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:18.574703   58417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:18.574722   58417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:18.574768   58417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.no-preload-382231 san=[127.0.0.1 192.168.39.227 localhost minikube no-preload-382231]
	I0719 15:47:18.636408   58417 provision.go:177] copyRemoteCerts
	I0719 15:47:18.636458   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:18.636477   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.638719   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639021   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.639054   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.639191   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.639379   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.639532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.639795   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:18.720305   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:18.742906   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:18.764937   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:47:18.787183   58417 provision.go:87] duration metric: took 218.257504ms to configureAuth
	I0719 15:47:18.787205   58417 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:18.787355   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 15:47:18.787418   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:18.789685   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.789992   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:18.790017   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:18.790181   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:18.790366   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790532   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:18.790632   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:18.790770   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:18.790929   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:18.790943   58417 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:19.053326   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:19.053350   58417 machine.go:97] duration metric: took 826.83404ms to provisionDockerMachine
	I0719 15:47:19.053364   58417 start.go:293] postStartSetup for "no-preload-382231" (driver="kvm2")
	I0719 15:47:19.053379   58417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:19.053409   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.053733   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:19.053755   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.056355   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056709   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.056737   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.056884   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.057037   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.057172   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.057370   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.136785   58417 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:19.140756   58417 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:19.140777   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:19.140847   58417 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:19.140941   58417 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:19.141044   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:19.150247   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:19.172800   58417 start.go:296] duration metric: took 119.424607ms for postStartSetup
	I0719 15:47:19.172832   58417 fix.go:56] duration metric: took 17.142146552s for fixHost
	I0719 15:47:19.172849   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.175427   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.175816   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.175851   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.176027   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.176281   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.176636   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.176892   58417 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:19.177051   58417 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0719 15:47:19.177061   58417 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:19.278564   58417 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404039.251890495
	
	I0719 15:47:19.278594   58417 fix.go:216] guest clock: 1721404039.251890495
	I0719 15:47:19.278605   58417 fix.go:229] Guest: 2024-07-19 15:47:19.251890495 +0000 UTC Remote: 2024-07-19 15:47:19.172835531 +0000 UTC m=+294.220034318 (delta=79.054964ms)
	I0719 15:47:19.278651   58417 fix.go:200] guest clock delta is within tolerance: 79.054964ms
	I0719 15:47:19.278659   58417 start.go:83] releasing machines lock for "no-preload-382231", held for 17.247997118s
	I0719 15:47:19.278692   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.279029   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:19.281674   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282034   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.282063   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.282221   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282750   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282935   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:47:19.282991   58417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:19.283061   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.283095   58417 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:19.283116   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:47:19.285509   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285805   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.285828   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285846   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.285959   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286182   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286276   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:19.286300   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:19.286468   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:47:19.286481   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286632   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.286672   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:47:19.286806   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:47:19.286935   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:47:19.363444   58417 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:19.387514   58417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:19.545902   58417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:19.551747   58417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:19.551812   58417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:19.568563   58417 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:19.568589   58417 start.go:495] detecting cgroup driver to use...
	I0719 15:47:19.568654   58417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:19.589440   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:19.604889   58417 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:19.604962   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:19.624114   58417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:19.638265   58417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:19.752880   58417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:19.900078   58417 docker.go:233] disabling docker service ...
	I0719 15:47:19.900132   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:19.914990   58417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:19.928976   58417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:20.079363   58417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:20.203629   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:20.218502   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:20.237028   58417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 15:47:20.237089   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.248514   58417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:20.248597   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.260162   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.272166   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.283341   58417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:20.294687   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.305495   58417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.328024   58417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:20.339666   58417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:20.349271   58417 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:20.349314   58417 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:20.364130   58417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:20.376267   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:20.501259   58417 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:47:20.643763   58417 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:20.643828   58417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:20.648525   58417 start.go:563] Will wait 60s for crictl version
	I0719 15:47:20.648586   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:20.652256   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:20.689386   58417 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:20.689468   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.720662   58417 ssh_runner.go:195] Run: crio --version
	I0719 15:47:20.751393   58417 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
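
The block above is the complete CRI-O handoff for no-preload-382231: crictl is pointed at the crio socket, the pause image and cgroup manager are rewritten in the 02-crio.conf drop-in, br_netfilter and IPv4 forwarding are enabled, and crio is restarted before the version checks. A minimal shell sketch of the same sequence, reusing the paths and values shown in the log (a condensed illustration, not the minikube code itself):

	#!/usr/bin/env bash
	set -euo pipefail
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Point crictl at the CRI-O socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and switch CRI-O to the cgroupfs cgroup manager.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	# Make bridged traffic visible to iptables and allow IPv4 forwarding.
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	# Restart the runtime and confirm it answers on its socket.
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	sudo crictl version
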
	I0719 15:47:19.301467   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .Start
	I0719 15:47:19.301647   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring networks are active...
	I0719 15:47:19.302430   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network default is active
	I0719 15:47:19.302790   58817 main.go:141] libmachine: (old-k8s-version-862924) Ensuring network mk-old-k8s-version-862924 is active
	I0719 15:47:19.303288   58817 main.go:141] libmachine: (old-k8s-version-862924) Getting domain xml...
	I0719 15:47:19.304087   58817 main.go:141] libmachine: (old-k8s-version-862924) Creating domain...
	I0719 15:47:20.540210   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting to get IP...
	I0719 15:47:20.541173   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.541580   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.541657   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.541560   59851 retry.go:31] will retry after 276.525447ms: waiting for machine to come up
	I0719 15:47:20.820097   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:20.820549   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:20.820577   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:20.820512   59851 retry.go:31] will retry after 350.128419ms: waiting for machine to come up
	I0719 15:47:21.172277   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.172787   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.172814   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.172742   59851 retry.go:31] will retry after 437.780791ms: waiting for machine to come up
	I0719 15:47:21.612338   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:21.612766   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:21.612796   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:21.612710   59851 retry.go:31] will retry after 607.044351ms: waiting for machine to come up
	I0719 15:47:22.221152   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.221715   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.221755   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.221589   59851 retry.go:31] will retry after 568.388882ms: waiting for machine to come up
	I0719 15:47:22.791499   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:22.791966   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:22.791996   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:22.791912   59851 retry.go:31] will retry after 786.805254ms: waiting for machine to come up
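
While the no-preload images are being prepared, process 58817 is restarting the old-k8s-version-862924 libvirt domain and polling for its DHCP lease with a growing backoff (the retry.go lines above). An equivalent manual check, assuming the kvm2 driver's qemu:///system connection; this is illustrative only, since minikube drives the wait through libmachine rather than virsh:

	# Poll libvirt for the domain's DHCP lease, backing off between attempts.
	DOMAIN=old-k8s-version-862924
	delay=0.3
	for attempt in $(seq 1 15); do
	  ip=$(virsh -c qemu:///system domifaddr "$DOMAIN" --source lease 2>/dev/null \
	        | awk '/ipv4/ {print $4}' | cut -d/ -f1)
	  if [ -n "$ip" ]; then
	    echo "machine is up at $ip"
	    break
	  fi
	  echo "attempt $attempt: no lease yet, retrying in ${delay}s"
	  sleep "$delay"
	  delay=$(echo "$delay * 1.5" | bc)
	done
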
	I0719 15:47:20.752939   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetIP
	I0719 15:47:20.755996   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756367   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:47:20.756395   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:47:20.756723   58417 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:20.760962   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:20.776973   58417 kubeadm.go:883] updating cluster {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:20.777084   58417 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 15:47:20.777120   58417 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:20.814520   58417 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0719 15:47:20.814547   58417 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:20.814631   58417 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:20.814650   58417 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.814657   58417 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.814682   58417 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.814637   58417 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.814736   58417 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.814808   58417 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.814742   58417 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:20.816417   58417 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:20.816435   58417 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:20.816446   58417 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:20.816513   58417 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:20.816535   58417 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 15:47:20.816559   58417 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:20.816719   58417 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:21.003845   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0719 15:47:21.028954   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.039628   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.041391   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.065499   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.084966   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.142812   58417 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0719 15:47:21.142873   58417 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0719 15:47:21.142905   58417 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.142921   58417 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0719 15:47:21.142939   58417 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.142962   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142877   58417 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.143025   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.142983   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.160141   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.182875   58417 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0719 15:47:21.182918   58417 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.182945   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 15:47:21.182958   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.182957   58417 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0719 15:47:21.182992   58417 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.183029   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.183044   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0719 15:47:21.183064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0719 15:47:21.272688   58417 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0719 15:47:21.272724   58417 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.272768   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:21.272783   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272825   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 15:47:21.272876   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.272906   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 15:47:21.272931   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 15:47:21.272971   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:21.272997   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 15:47:21.273064   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:21.326354   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326356   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 15:47:21.326441   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0719 15:47:21.326457   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326459   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:21.326492   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0719 15:47:21.326497   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.326529   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0719 15:47:21.326535   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0719 15:47:21.326633   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:21.363401   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:21.363496   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:22.268448   58417 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.010876   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.684346805s)
	I0719 15:47:24.010910   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0719 15:47:24.010920   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.684439864s)
	I0719 15:47:24.010952   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0719 15:47:24.010930   58417 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.010993   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.684342001s)
	I0719 15:47:24.011014   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0719 15:47:24.011019   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011046   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.647533327s)
	I0719 15:47:24.011066   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0719 15:47:24.011098   58417 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.742620594s)
	I0719 15:47:24.011137   58417 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0719 15:47:24.011170   58417 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:24.011204   58417 ssh_runner.go:195] Run: which crictl
	I0719 15:47:23.580485   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:23.580950   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:23.580983   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:23.580876   59851 retry.go:31] will retry after 919.322539ms: waiting for machine to come up
	I0719 15:47:24.502381   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:24.502817   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:24.502844   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:24.502776   59851 retry.go:31] will retry after 1.142581835s: waiting for machine to come up
	I0719 15:47:25.647200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:25.647663   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:25.647693   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:25.647559   59851 retry.go:31] will retry after 1.682329055s: waiting for machine to come up
	I0719 15:47:27.332531   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:27.333052   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:27.333080   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:27.333003   59851 retry.go:31] will retry after 1.579786507s: waiting for machine to come up
	I0719 15:47:27.292973   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.281931356s)
	I0719 15:47:27.293008   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0719 15:47:27.293001   58417 ssh_runner.go:235] Completed: which crictl: (3.281778521s)
	I0719 15:47:27.293043   58417 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:27.293064   58417 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:27.293086   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0719 15:47:29.269642   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.976526914s)
	I0719 15:47:29.269676   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0719 15:47:29.269698   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269641   58417 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.97655096s)
	I0719 15:47:29.269748   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0719 15:47:29.269773   58417 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 15:47:29.269875   58417 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:28.914628   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:28.915181   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:28.915221   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:28.915127   59851 retry.go:31] will retry after 2.156491688s: waiting for machine to come up
	I0719 15:47:31.073521   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:31.074101   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:31.074136   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:31.074039   59851 retry.go:31] will retry after 2.252021853s: waiting for machine to come up
	I0719 15:47:31.242199   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.972421845s)
	I0719 15:47:31.242257   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0719 15:47:31.242273   58417 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.972374564s)
	I0719 15:47:31.242283   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:31.242306   58417 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0719 15:47:31.242334   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0719 15:47:32.592736   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.350379333s)
	I0719 15:47:32.592762   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0719 15:47:32.592782   58417 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:32.592817   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0719 15:47:34.547084   58417 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.954243196s)
	I0719 15:47:34.547122   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0719 15:47:34.547155   58417 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 15:47:34.547231   58417 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
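
The cache-loading exchange above reduces to one pattern per image: ask podman whether the image is already in the store, drop any stale reference through crictl, then stream the cached tarball in with podman load and mark it transferred. A sketch of that step for a single image, using the tarball path shape from the log (adjust IMAGE and TARBALL for the other images):

	IMAGE=registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	TARBALL=/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	if ! sudo podman image inspect --format '{{.Id}}' "$IMAGE" >/dev/null 2>&1; then
	  # Remove any stale reference the runtime still holds, then load from the cache.
	  sudo /usr/bin/crictl rmi "$IMAGE" 2>/dev/null || true
	  sudo podman load -i "$TARBALL"
	fi
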
	I0719 15:47:33.328344   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:33.328815   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | unable to find current IP address of domain old-k8s-version-862924 in network mk-old-k8s-version-862924
	I0719 15:47:33.328849   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | I0719 15:47:33.328779   59851 retry.go:31] will retry after 4.118454422s: waiting for machine to come up
	I0719 15:47:37.451169   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.451651   58817 main.go:141] libmachine: (old-k8s-version-862924) Found IP for machine: 192.168.50.102
	I0719 15:47:37.451677   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserving static IP address...
	I0719 15:47:37.451691   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has current primary IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.452205   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.452240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | skip adding static IP to network mk-old-k8s-version-862924 - found existing host DHCP lease matching {name: "old-k8s-version-862924", mac: "52:54:00:36:d7:f3", ip: "192.168.50.102"}
	I0719 15:47:37.452258   58817 main.go:141] libmachine: (old-k8s-version-862924) Reserved static IP address: 192.168.50.102
	I0719 15:47:37.452276   58817 main.go:141] libmachine: (old-k8s-version-862924) Waiting for SSH to be available...
	I0719 15:47:37.452287   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Getting to WaitForSSH function...
	I0719 15:47:37.454636   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455004   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.455043   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.455210   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH client type: external
	I0719 15:47:37.455242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa (-rw-------)
	I0719 15:47:37.455284   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:37.455302   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | About to run SSH command:
	I0719 15:47:37.455316   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | exit 0
	I0719 15:47:37.583375   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:37.583754   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetConfigRaw
	I0719 15:47:37.584481   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.587242   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587644   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.587668   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.587961   58817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/config.json ...
	I0719 15:47:37.588195   58817 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:37.588217   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:37.588446   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.590801   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591137   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.591166   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.591308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.591471   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591592   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.591736   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.591896   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.592100   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.592111   58817 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:37.698760   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:37.698787   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699086   58817 buildroot.go:166] provisioning hostname "old-k8s-version-862924"
	I0719 15:47:37.699113   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.699326   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.701828   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702208   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.702253   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.702339   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.702508   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702674   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.702817   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.702983   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.703136   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.703147   58817 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-862924 && echo "old-k8s-version-862924" | sudo tee /etc/hostname
	I0719 15:47:37.823930   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-862924
	
	I0719 15:47:37.823960   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.826546   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.826875   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.826912   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.827043   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:37.827336   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827506   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:37.827690   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:37.827858   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:37.828039   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:37.828056   58817 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-862924' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-862924/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-862924' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:37.935860   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:37.935888   58817 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:37.935917   58817 buildroot.go:174] setting up certificates
	I0719 15:47:37.935927   58817 provision.go:84] configureAuth start
	I0719 15:47:37.935939   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetMachineName
	I0719 15:47:37.936223   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:37.938638   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.938990   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.939017   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.939170   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:37.941161   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941458   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:37.941487   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:37.941597   58817 provision.go:143] copyHostCerts
	I0719 15:47:37.941669   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:37.941682   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:37.941731   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:37.941824   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:37.941832   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:37.941850   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:37.941910   58817 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:37.941919   58817 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:37.941942   58817 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:37.942003   58817 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-862924 san=[127.0.0.1 192.168.50.102 localhost minikube old-k8s-version-862924]
	I0719 15:47:38.046717   58817 provision.go:177] copyRemoteCerts
	I0719 15:47:38.046770   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:38.046799   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.049240   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049578   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.049611   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.049806   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.050026   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.050200   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.050377   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.133032   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:38.157804   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0719 15:47:38.184189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:47:38.207761   58817 provision.go:87] duration metric: took 271.801669ms to configureAuth
	I0719 15:47:38.207801   58817 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:38.208023   58817 config.go:182] Loaded profile config "old-k8s-version-862924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:47:38.208148   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.211030   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211467   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.211497   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.211675   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.211851   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212046   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.212195   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.212374   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.212556   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.212578   58817 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:38.759098   59208 start.go:364] duration metric: took 2m59.27000152s to acquireMachinesLock for "default-k8s-diff-port-601445"
	I0719 15:47:38.759165   59208 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:38.759176   59208 fix.go:54] fixHost starting: 
	I0719 15:47:38.759633   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:38.759685   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:38.779587   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0719 15:47:38.779979   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:38.780480   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:47:38.780497   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:38.780888   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:38.781129   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:38.781260   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:47:38.782786   59208 fix.go:112] recreateIfNeeded on default-k8s-diff-port-601445: state=Stopped err=<nil>
	I0719 15:47:38.782860   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	W0719 15:47:38.783056   59208 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:38.785037   59208 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-601445" ...
	I0719 15:47:38.786497   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Start
	I0719 15:47:38.786691   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring networks are active...
	I0719 15:47:38.787520   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network default is active
	I0719 15:47:38.787819   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Ensuring network mk-default-k8s-diff-port-601445 is active
	I0719 15:47:38.788418   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Getting domain xml...
	I0719 15:47:38.789173   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Creating domain...
	I0719 15:47:35.191148   58417 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 15:47:35.191193   58417 cache_images.go:123] Successfully loaded all cached images
	I0719 15:47:35.191198   58417 cache_images.go:92] duration metric: took 14.376640053s to LoadCachedImages
	I0719 15:47:35.191209   58417 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0-beta.0 crio true true} ...
	I0719 15:47:35.191329   58417 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-382231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:35.191424   58417 ssh_runner.go:195] Run: crio config
	I0719 15:47:35.236248   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:35.236276   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:35.236288   58417 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:35.236309   58417 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-382231 NodeName:no-preload-382231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:47:35.236464   58417 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-382231"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:35.236525   58417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 15:47:35.247524   58417 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:35.247611   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:35.257583   58417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0719 15:47:35.275057   58417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 15:47:35.291468   58417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
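	The multi-document kubeadm/kubelet/kube-proxy config dumped above is rendered from the options struct logged at kubeadm.go:181 and shipped to /var/tmp/minikube/kubeadm.yaml.new as shown. As a rough sketch of that rendering step (illustrative only; the struct fields and template below are assumptions for the sketch, not minikube's actual types or template):

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative subset of the options that feed the kubeadm config above.
	// Field names here are assumptions for the sketch, not minikube's types.
	type kubeadmParams struct {
		AdvertiseAddress  string
		APIServerPort     int
		ClusterName       string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.39.227",
			APIServerPort:     8443,
			ClusterName:       "mk",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.31.0-beta.0",
		}
		// Render the fragment to stdout; in the log the full file is written
		// to /var/tmp/minikube/kubeadm.yaml.new before being copied into place.
		tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}

	Running the sketch prints a fragment equivalent to the InitConfiguration/ClusterConfiguration sections dumped above.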
	I0719 15:47:35.308021   58417 ssh_runner.go:195] Run: grep 192.168.39.227	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:35.312121   58417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:35.324449   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:35.451149   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:35.477844   58417 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231 for IP: 192.168.39.227
	I0719 15:47:35.477868   58417 certs.go:194] generating shared ca certs ...
	I0719 15:47:35.477887   58417 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:35.478043   58417 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:35.478093   58417 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:35.478103   58417 certs.go:256] generating profile certs ...
	I0719 15:47:35.478174   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.key
	I0719 15:47:35.478301   58417 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key.46f9a235
	I0719 15:47:35.478339   58417 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key
	I0719 15:47:35.478482   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:35.478520   58417 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:35.478530   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:35.478549   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:35.478569   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:35.478591   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:35.478628   58417 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:35.479291   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:35.523106   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:35.546934   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:35.585616   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:35.617030   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 15:47:35.641486   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 15:47:35.680051   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:35.703679   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:47:35.728088   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:35.751219   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:35.774149   58417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:35.796985   58417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:35.813795   58417 ssh_runner.go:195] Run: openssl version
	I0719 15:47:35.819568   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:35.830350   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834792   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.834847   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:35.840531   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:35.851584   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:35.862655   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867139   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.867199   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:35.872916   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:35.883986   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:35.894795   58417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899001   58417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.899049   58417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:35.904496   58417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:35.915180   58417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:35.919395   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:35.926075   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:35.931870   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:35.938089   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:35.944079   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:35.950449   58417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
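	The openssl x509 -checkend 86400 runs above ask whether each control-plane certificate is still valid for at least the next 24 hours; a failing check would trigger regeneration. A minimal Go equivalent of that expiry-window test (a sketch only; minikube shells out to openssl as logged):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file at
	// path expires within d (the -checkend 86400 window used above).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Same certificates the log checks with `openssl x509 -checkend 86400`.
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}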
	I0719 15:47:35.956291   58417 kubeadm.go:392] StartCluster: {Name:no-preload-382231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-382231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:35.956396   58417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:35.956452   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:35.993976   58417 cri.go:89] found id: ""
	I0719 15:47:35.994047   58417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:36.004507   58417 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:36.004532   58417 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:36.004579   58417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:36.014644   58417 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:36.015628   58417 kubeconfig.go:125] found "no-preload-382231" server: "https://192.168.39.227:8443"
	I0719 15:47:36.017618   58417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:36.027252   58417 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.227
	I0719 15:47:36.027281   58417 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:36.027292   58417 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:36.027350   58417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:36.066863   58417 cri.go:89] found id: ""
	I0719 15:47:36.066934   58417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:36.082971   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:36.092782   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:36.092802   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:36.092841   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:36.101945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:36.101998   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:36.111368   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:36.120402   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:36.120447   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:36.130124   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.138945   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:36.138990   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:36.148176   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:36.157008   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:36.157060   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:36.166273   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:36.176032   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:36.291855   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.285472   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.476541   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.547807   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:37.652551   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:37.652649   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.153088   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.653690   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:38.718826   58417 api_server.go:72] duration metric: took 1.066275053s to wait for apiserver process to appear ...
	I0719 15:47:38.718858   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:47:38.718891   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:38.503709   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:38.503737   58817 machine.go:97] duration metric: took 915.527957ms to provisionDockerMachine
	I0719 15:47:38.503750   58817 start.go:293] postStartSetup for "old-k8s-version-862924" (driver="kvm2")
	I0719 15:47:38.503762   58817 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:38.503783   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.504151   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:38.504180   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.507475   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.507843   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.507877   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.508083   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.508314   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.508465   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.508583   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.593985   58817 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:38.598265   58817 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:38.598287   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:38.598352   58817 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:38.598446   58817 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:38.598533   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:38.609186   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:38.644767   58817 start.go:296] duration metric: took 141.002746ms for postStartSetup
	I0719 15:47:38.644808   58817 fix.go:56] duration metric: took 19.365976542s for fixHost
	I0719 15:47:38.644836   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.648171   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648545   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.648576   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.648777   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.649009   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649185   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.649360   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.649513   58817 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:38.649779   58817 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.102 22 <nil> <nil>}
	I0719 15:47:38.649795   58817 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:47:38.758955   58817 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404058.716653194
	
	I0719 15:47:38.758978   58817 fix.go:216] guest clock: 1721404058.716653194
	I0719 15:47:38.758987   58817 fix.go:229] Guest: 2024-07-19 15:47:38.716653194 +0000 UTC Remote: 2024-07-19 15:47:38.644812576 +0000 UTC m=+255.418683135 (delta=71.840618ms)
	I0719 15:47:38.759010   58817 fix.go:200] guest clock delta is within tolerance: 71.840618ms
	I0719 15:47:38.759017   58817 start.go:83] releasing machines lock for "old-k8s-version-862924", held for 19.4802155s
	I0719 15:47:38.759056   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.759308   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:38.761901   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762334   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.762368   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.762525   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763030   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763198   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .DriverName
	I0719 15:47:38.763296   58817 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:38.763343   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.763489   58817 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:38.763522   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHHostname
	I0719 15:47:38.766613   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.766771   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767028   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767050   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767200   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:38.767219   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:38.767298   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767377   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHPort
	I0719 15:47:38.767453   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767577   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHKeyPath
	I0719 15:47:38.767637   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767723   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetSSHUsername
	I0719 15:47:38.767768   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.767845   58817 sshutil.go:53] new ssh client: &{IP:192.168.50.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/old-k8s-version-862924/id_rsa Username:docker}
	I0719 15:47:38.874680   58817 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:38.882155   58817 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:47:39.030824   58817 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:47:39.038357   58817 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:47:39.038458   58817 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:47:39.059981   58817 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:47:39.060015   58817 start.go:495] detecting cgroup driver to use...
	I0719 15:47:39.060081   58817 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:47:39.082631   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:47:39.101570   58817 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:47:39.101628   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:47:39.120103   58817 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:47:39.139636   58817 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:47:39.259574   58817 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:47:39.441096   58817 docker.go:233] disabling docker service ...
	I0719 15:47:39.441162   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:47:39.460197   58817 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:47:39.476884   58817 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:47:39.639473   58817 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:47:39.773468   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:47:39.790968   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:47:39.811330   58817 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 15:47:39.811407   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.823965   58817 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:47:39.824057   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.835454   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.846201   58817 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:47:39.856951   58817 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:47:39.869495   58817 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:47:39.880850   58817 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:47:39.880914   58817 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:47:39.900465   58817 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:47:39.911488   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:40.032501   58817 ssh_runner.go:195] Run: sudo systemctl restart crio
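	The sed commands above point CRI-O at the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon restart. A rough Go sketch of the same key rewrite, assuming each key appears at most once per file (minikube itself simply runs sed as logged):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfValue rewrites a `key = ...` line in a CRI-O drop-in such as
	// /etc/crio/crio.conf.d/02-crio.conf, or reports the key as missing.
	func setConfValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		if !re.Match(data) {
			return fmt.Errorf("%s: key %q not found", path, key)
		}
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		for key, val := range map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.2",
			"cgroup_manager": "cgroupfs",
		} {
			if err := setConfValue(conf, key, val); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}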
	I0719 15:47:40.194606   58817 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:47:40.194676   58817 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:47:40.199572   58817 start.go:563] Will wait 60s for crictl version
	I0719 15:47:40.199683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:40.203747   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:47:40.246479   58817 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:47:40.246594   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.275992   58817 ssh_runner.go:195] Run: crio --version
	I0719 15:47:40.313199   58817 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 15:47:40.314363   58817 main.go:141] libmachine: (old-k8s-version-862924) Calling .GetIP
	I0719 15:47:40.317688   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318081   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:d7:f3", ip: ""} in network mk-old-k8s-version-862924: {Iface:virbr4 ExpiryTime:2024-07-19 16:47:29 +0000 UTC Type:0 Mac:52:54:00:36:d7:f3 Iaid: IPaddr:192.168.50.102 Prefix:24 Hostname:old-k8s-version-862924 Clientid:01:52:54:00:36:d7:f3}
	I0719 15:47:40.318106   58817 main.go:141] libmachine: (old-k8s-version-862924) DBG | domain old-k8s-version-862924 has defined IP address 192.168.50.102 and MAC address 52:54:00:36:d7:f3 in network mk-old-k8s-version-862924
	I0719 15:47:40.318333   58817 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 15:47:40.323006   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:40.336488   58817 kubeadm.go:883] updating cluster {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:47:40.336626   58817 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 15:47:40.336672   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:40.394863   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:40.394934   58817 ssh_runner.go:195] Run: which lz4
	I0719 15:47:40.399546   58817 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:47:40.404163   58817 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:47:40.404197   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 15:47:42.191817   58817 crio.go:462] duration metric: took 1.792317426s to copy over tarball
	I0719 15:47:42.191882   58817 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:41.984204   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:41.984237   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:41.984255   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.031024   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:47:42.031055   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:47:42.219815   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.256851   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.256888   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:42.719015   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:42.756668   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:42.756705   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.219173   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.255610   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:47:43.255645   58417 api_server.go:103] status: https://192.168.39.227:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:47:43.719116   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:47:43.725453   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:47:43.739070   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:47:43.739108   58417 api_server.go:131] duration metric: took 5.020238689s to wait for apiserver health ...
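	The loop above (api_server.go:253/279) probes https://192.168.39.227:8443/healthz roughly every 500ms, treating the early 403 (anonymous user) and 500 (post-start hooks still failing) responses as not-ready until a 200 arrives. A condensed sketch of that wait follows; the insecure TLS config is an assumption for the sketch, since the probe runs before cluster credentials are wired into a client:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 or the timeout expires, mirroring the loop in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch: skip TLS verification because the
			// probe runs before the cluster CA is wired into a kubeconfig.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
				// 403 (system:anonymous) and 500 (post-start hooks still
				// failing) both mean "keep waiting", as seen in the log.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.227:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}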
	I0719 15:47:43.739119   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:47:43.739128   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:43.741458   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:47:40.069048   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting to get IP...
	I0719 15:47:40.069866   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070409   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.070480   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.070379   59996 retry.go:31] will retry after 299.168281ms: waiting for machine to come up
	I0719 15:47:40.370939   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371381   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.371411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.371340   59996 retry.go:31] will retry after 388.345842ms: waiting for machine to come up
	I0719 15:47:40.761301   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762861   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:40.762889   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:40.762797   59996 retry.go:31] will retry after 305.39596ms: waiting for machine to come up
	I0719 15:47:41.070215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070791   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.070823   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.070746   59996 retry.go:31] will retry after 452.50233ms: waiting for machine to come up
	I0719 15:47:41.525465   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.525997   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:41.526019   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:41.525920   59996 retry.go:31] will retry after 686.050268ms: waiting for machine to come up
	I0719 15:47:42.214012   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214513   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:42.214545   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:42.214465   59996 retry.go:31] will retry after 867.815689ms: waiting for machine to come up
	I0719 15:47:43.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084240   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:43.084262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:43.084198   59996 retry.go:31] will retry after 1.006018507s: waiting for machine to come up
	I0719 15:47:44.092571   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:44.093050   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:44.092992   59996 retry.go:31] will retry after 961.604699ms: waiting for machine to come up
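The retry.go:31 lines above are a backoff loop: libvirt has defined the domain's MAC address but no DHCP lease has appeared yet, so the driver sleeps for a growing, jittered interval and asks again. A minimal Go sketch of that pattern follows; the function names, durations and the fake lookup are illustrative, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping for a growing, jittered interval between attempts, the same shape
// as the "will retry after ..." lines in the log above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		// Add up to 50% jitter so concurrent waiters do not poll in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d: no IP yet, will retry after %s\n", attempt, sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff += backoff / 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		// Stand-in for querying the libvirt DHCP leases; it reports an
		// address only after a few seconds have passed.
		if time.Since(start) > 3*time.Second {
			return "192.168.61.144", nil
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}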
	I0719 15:47:43.743125   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:47:43.780558   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:47:43.825123   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:47:43.849564   58417 system_pods.go:59] 8 kube-system pods found
	I0719 15:47:43.849608   58417 system_pods.go:61] "coredns-5cfdc65f69-9p4dr" [b6744bc9-b683-4f7e-b506-a95eb58ac308] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:47:43.849620   58417 system_pods.go:61] "etcd-no-preload-382231" [1f2704ae-84a0-4636-9826-f6bb5d2cb8b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:47:43.849632   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [e4ae90fb-9024-4420-9249-6f936ff43894] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:47:43.849643   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [ceb3538d-a6b9-4135-b044-b139003baf35] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:47:43.849650   58417 system_pods.go:61] "kube-proxy-z2z9r" [fdc0eb8f-2884-436b-ba1e-4c71107f756c] Running
	I0719 15:47:43.849657   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [5ae3221b-7186-4dbe-9b1b-fb4c8c239c62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:47:43.849677   58417 system_pods.go:61] "metrics-server-78fcd8795b-zwr8g" [4d4de9aa-89f2-4cf4-85c2-26df25bd82c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:47:43.849687   58417 system_pods.go:61] "storage-provisioner" [ab5ce17f-a0da-4ab7-803e-245ba4363d09] Running
	I0719 15:47:43.849696   58417 system_pods.go:74] duration metric: took 24.54438ms to wait for pod list to return data ...
	I0719 15:47:43.849709   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:47:43.864512   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:47:43.864636   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:47:43.864684   58417 node_conditions.go:105] duration metric: took 14.967708ms to run NodePressure ...
	I0719 15:47:43.864727   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:44.524399   58417 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531924   58417 kubeadm.go:739] kubelet initialised
	I0719 15:47:44.531944   58417 kubeadm.go:740] duration metric: took 7.516197ms waiting for restarted kubelet to initialise ...
	I0719 15:47:44.531952   58417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:47:44.538016   58417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
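pod_ready.go is polling each system-critical pod's Ready condition with a per-pod budget of 4m0s. An equivalent check can be expressed by shelling out to kubectl and reading the Ready condition with a JSONPath filter; this is only a sketch, and it assumes kubectl is on PATH and that the profile's kubectl context is named "no-preload-382231" (an assumption, not stated in the log).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is currently "True".
func podReady(ctx, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", namespace,
		"get", "pod", pod, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

// waitPodReady polls every two seconds until the pod is Ready or the timeout
// expires, mirroring the 4m0s per-pod wait in the log.
func waitPodReady(ctx, namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ok, err := podReady(ctx, namespace, pod); err == nil && ok {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	err := waitPodReady("no-preload-382231", "kube-system", "coredns-5cfdc65f69-9p4dr", 4*time.Minute)
	fmt.Println(err)
}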
	I0719 15:47:45.377244   58817 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.18533335s)
	I0719 15:47:45.377275   58817 crio.go:469] duration metric: took 3.185430213s to extract the tarball
	I0719 15:47:45.377282   58817 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:47:45.422160   58817 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:47:45.463351   58817 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 15:47:45.463377   58817 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 15:47:45.463437   58817 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.463445   58817 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.463484   58817 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.463496   58817 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.463616   58817 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.463452   58817 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.463470   58817 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465250   58817 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 15:47:45.465259   58817 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.465270   58817 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:45.465280   58817 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.465252   58817 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.465254   58817 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.465322   58817 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.465358   58817 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.652138   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 15:47:45.694548   58817 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 15:47:45.694600   58817 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 15:47:45.694655   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.698969   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 15:47:45.721986   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.747138   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 15:47:45.779449   58817 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 15:47:45.779485   58817 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.779526   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.783597   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 15:47:45.822950   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.825025   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 15:47:45.830471   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.835797   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.837995   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.840998   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.907741   58817 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 15:47:45.907793   58817 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.907845   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.928805   58817 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 15:47:45.928844   58817 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.928918   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.948467   58817 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 15:47:45.948522   58817 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.948571   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.966584   58817 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 15:47:45.966629   58817 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:45.966683   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975276   58817 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 15:47:45.975316   58817 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:45.975339   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 15:47:45.975355   58817 ssh_runner.go:195] Run: which crictl
	I0719 15:47:45.975378   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 15:47:45.975424   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 15:47:45.975449   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 15:47:46.069073   58817 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 15:47:46.069100   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 15:47:46.079020   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 15:47:46.080816   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 15:47:46.080818   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 15:47:46.111983   58817 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 15:47:46.308204   58817 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:47:46.465651   58817 cache_images.go:92] duration metric: took 1.002255395s to LoadCachedImages
	W0719 15:47:46.465740   58817 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-3847/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
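cache_images.go decides, per required image, whether the runtime already holds it: it asks podman for the image ID, and if the expected ID is missing it removes any stale copy with crictl and schedules a load from the local cache directory. Below is a sketch of that check built on the same commands the log runs; sudo, podman and crictl on the guest are assumed, and the expected ID shown is taken from the pause:3.2 message above purely for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image must be (re)loaded into the
// container runtime: true when podman has no image with the expected ID.
func needsTransfer(image, wantID string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		// podman exits non-zero when the image is absent.
		return true, nil
	}
	return strings.TrimSpace(string(out)) != wantID, nil
}

// removeImage drops a stale copy so the cached tarball can be loaded cleanly,
// matching the "crictl rmi" calls in the log.
func removeImage(image string) error {
	return exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
}

func main() {
	image := "registry.k8s.io/pause:3.2"
	want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	stale, err := needsTransfer(image, want)
	fmt.Println(stale, err)
	if err == nil && stale {
		fmt.Println("rmi:", removeImage(image))
	}
}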
	I0719 15:47:46.465753   58817 kubeadm.go:934] updating node { 192.168.50.102 8443 v1.20.0 crio true true} ...
	I0719 15:47:46.465899   58817 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-862924 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:47:46.465973   58817 ssh_runner.go:195] Run: crio config
	I0719 15:47:46.524125   58817 cni.go:84] Creating CNI manager for ""
	I0719 15:47:46.524152   58817 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:47:46.524167   58817 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:47:46.524190   58817 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.102 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-862924 NodeName:old-k8s-version-862924 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.102"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.102 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 15:47:46.524322   58817 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.102
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-862924"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.102
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.102"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:47:46.524476   58817 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 15:47:46.534654   58817 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:47:46.534726   58817 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:47:46.544888   58817 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 15:47:46.565864   58817 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:47:46.584204   58817 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 15:47:46.603470   58817 ssh_runner.go:195] Run: grep 192.168.50.102	control-plane.minikube.internal$ /etc/hosts
	I0719 15:47:46.607776   58817 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.102	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:47:46.624713   58817 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:47:46.752753   58817 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:47:46.776115   58817 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924 for IP: 192.168.50.102
	I0719 15:47:46.776151   58817 certs.go:194] generating shared ca certs ...
	I0719 15:47:46.776182   58817 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:46.776376   58817 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:47:46.776431   58817 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:47:46.776443   58817 certs.go:256] generating profile certs ...
	I0719 15:47:46.776559   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.key
	I0719 15:47:46.776622   58817 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key.4659f1b2
	I0719 15:47:46.776673   58817 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key
	I0719 15:47:46.776811   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:47:46.776860   58817 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:47:46.776880   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:47:46.776922   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:47:46.776961   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:47:46.776991   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:47:46.777051   58817 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:46.777929   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:47:46.815207   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:47:46.863189   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:47:46.894161   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:47:46.932391   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 15:47:46.981696   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:47:47.016950   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:47:47.043597   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 15:47:47.067408   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:47:47.092082   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:47:47.116639   58817 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:47:47.142425   58817 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:47:47.161443   58817 ssh_runner.go:195] Run: openssl version
	I0719 15:47:47.167678   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:47:47.180194   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185276   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.185330   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:47:47.191437   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:47:47.203471   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:47:47.215645   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220392   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.220444   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:47:47.226332   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:47:47.238559   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:47:47.251382   58817 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256213   58817 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.256268   58817 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:47:47.262261   58817 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:47:47.275192   58817 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:47:47.280176   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:47:47.288308   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:47:47.295013   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:47:47.301552   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:47:47.307628   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:47:47.313505   58817 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
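The openssl x509 -checkend 86400 calls above confirm that each control-plane certificate is still valid for at least another 24 hours before it is reused. The same check can be done in Go with the standard library alone; this is a sketch, using one of the certificate paths from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, the Go equivalent of
// "openssl x509 -noout -in <path> -checkend <seconds>".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}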
	I0719 15:47:47.319956   58817 kubeadm.go:392] StartCluster: {Name:old-k8s-version-862924 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-862924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.102 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:47:47.320042   58817 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:47:47.320097   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.359706   58817 cri.go:89] found id: ""
	I0719 15:47:47.359789   58817 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:47:47.373816   58817 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:47:47.373839   58817 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:47:47.373907   58817 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:47:47.386334   58817 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:47:47.387432   58817 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-862924" does not appear in /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:47:47.388146   58817 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-3847/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-862924" cluster setting kubeconfig missing "old-k8s-version-862924" context setting]
	I0719 15:47:47.389641   58817 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:47:47.393000   58817 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:47:47.404737   58817 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.102
	I0719 15:47:47.404770   58817 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:47:47.404782   58817 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:47:47.404847   58817 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:47:47.448460   58817 cri.go:89] found id: ""
	I0719 15:47:47.448529   58817 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:47:47.466897   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:47:47.479093   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:47:47.479136   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:47:47.479201   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:47:47.490338   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:47:47.490425   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:47:47.502079   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:47:47.514653   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:47:47.514722   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:47:47.526533   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.536043   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:47:47.536109   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:47:47.545691   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:47:47.555221   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:47:47.555295   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:47:47.564645   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:47:47.574094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:47.740041   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:45.055856   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056318   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:45.056347   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:45.056263   59996 retry.go:31] will retry after 1.300059023s: waiting for machine to come up
	I0719 15:47:46.357875   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358379   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:46.358407   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:46.358331   59996 retry.go:31] will retry after 2.269558328s: waiting for machine to come up
	I0719 15:47:48.630965   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631641   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:48.631674   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:48.631546   59996 retry.go:31] will retry after 2.829487546s: waiting for machine to come up
	I0719 15:47:47.449778   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:48.045481   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:48.045508   58417 pod_ready.go:81] duration metric: took 3.507466621s for pod "coredns-5cfdc65f69-9p4dr" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.045521   58417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:48.272472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.545776   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:47:48.692516   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
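Rather than a full kubeadm init, the restart path replays individual phases against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane and etcd, in the order the log runs them. A compact sketch of that sequence, reusing the exact command shape from the log (the loop itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases replayed during a cluster restart, in the order seen above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
		fmt.Printf("phase %q done\n", phase)
	}
}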
	I0719 15:47:48.799640   58817 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:47:48.799721   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.299983   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:49.800470   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.300833   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:50.800741   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.300351   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.800185   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:52.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:51.463569   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464003   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:51.464021   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:51.463968   59996 retry.go:31] will retry after 2.917804786s: waiting for machine to come up
	I0719 15:47:54.383261   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383967   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | unable to find current IP address of domain default-k8s-diff-port-601445 in network mk-default-k8s-diff-port-601445
	I0719 15:47:54.383993   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | I0719 15:47:54.383924   59996 retry.go:31] will retry after 4.044917947s: waiting for machine to come up
	I0719 15:47:50.052168   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:51.052114   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:51.052135   58417 pod_ready.go:81] duration metric: took 3.006607122s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:51.052144   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059540   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:52.059563   58417 pod_ready.go:81] duration metric: took 1.007411773s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:52.059576   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.066338   58417 pod_ready.go:102] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:54.567056   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.567076   58417 pod_ready.go:81] duration metric: took 2.507493559s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.567085   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571655   58417 pod_ready.go:92] pod "kube-proxy-z2z9r" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.571672   58417 pod_ready.go:81] duration metric: took 4.581191ms for pod "kube-proxy-z2z9r" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.571680   58417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.575983   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:47:54.576005   58417 pod_ready.go:81] duration metric: took 4.315788ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:54.576017   58417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	I0719 15:47:53.300353   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:53.800804   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:54.800691   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:55.800502   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.300314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:56.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.300773   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:57.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
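After the etcd phase, the log polls roughly every 500ms for a kube-apiserver process with pgrep before moving on to API health checks. A minimal polling loop over the same pgrep invocation; the two-minute timeout here is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverPID returns the newest kube-apiserver PID, using the same pgrep
// pattern the log runs over ssh.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", err // pgrep exits non-zero when nothing matches
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("kube-apiserver running, pid", pid)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process to appear")
}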
	I0719 15:47:58.432420   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432945   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Found IP for machine: 192.168.61.144
	I0719 15:47:58.432976   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has current primary IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.432988   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserving static IP address...
	I0719 15:47:58.433361   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.433395   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | skip adding static IP to network mk-default-k8s-diff-port-601445 - found existing host DHCP lease matching {name: "default-k8s-diff-port-601445", mac: "52:54:00:97:8a:83", ip: "192.168.61.144"}
	I0719 15:47:58.433412   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Reserved static IP address: 192.168.61.144
	I0719 15:47:58.433430   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Waiting for SSH to be available...
	I0719 15:47:58.433442   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Getting to WaitForSSH function...
	I0719 15:47:58.435448   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435770   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.435807   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.435868   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH client type: external
	I0719 15:47:58.435930   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa (-rw-------)
	I0719 15:47:58.435973   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:47:58.435992   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | About to run SSH command:
	I0719 15:47:58.436002   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | exit 0
	I0719 15:47:58.562187   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | SSH cmd err, output: <nil>: 
	I0719 15:47:58.562564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetConfigRaw
	I0719 15:47:58.563233   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.565694   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566042   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.566066   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.566301   59208 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/config.json ...
	I0719 15:47:58.566469   59208 machine.go:94] provisionDockerMachine start ...
	I0719 15:47:58.566489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:58.566684   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.569109   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569485   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.569512   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.569594   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.569763   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.569912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.570022   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.570167   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.570398   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.570412   59208 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:47:58.675164   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:47:58.675217   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675455   59208 buildroot.go:166] provisioning hostname "default-k8s-diff-port-601445"
	I0719 15:47:58.675487   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.675664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.678103   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678522   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.678564   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.678721   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.678908   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679074   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.679198   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.679345   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.679516   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.679531   59208 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-601445 && echo "default-k8s-diff-port-601445" | sudo tee /etc/hostname
	I0719 15:47:58.802305   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-601445
	
	I0719 15:47:58.802336   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.805215   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805582   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.805613   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.805796   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:58.805981   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806139   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:58.806322   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:58.806517   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:58.806689   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:58.806706   59208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-601445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-601445/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-601445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:47:58.919959   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:47:58.919985   59208 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:47:58.920019   59208 buildroot.go:174] setting up certificates
	I0719 15:47:58.920031   59208 provision.go:84] configureAuth start
	I0719 15:47:58.920041   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetMachineName
	I0719 15:47:58.920283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:58.922837   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923193   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.923225   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.923413   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:58.925832   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926128   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:58.926156   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:58.926297   59208 provision.go:143] copyHostCerts
	I0719 15:47:58.926360   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:47:58.926374   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:47:58.926425   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:47:58.926512   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:47:58.926520   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:47:58.926543   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:47:58.926600   59208 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:47:58.926609   59208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:47:58.926630   59208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:47:58.926682   59208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-601445 san=[127.0.0.1 192.168.61.144 default-k8s-diff-port-601445 localhost minikube]
	I0719 15:47:59.080911   59208 provision.go:177] copyRemoteCerts
	I0719 15:47:59.080966   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:47:59.080990   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.083723   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084029   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.084059   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.084219   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.084411   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.084531   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.084674   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.172754   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:47:59.198872   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0719 15:47:59.222898   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 15:47:59.246017   59208 provision.go:87] duration metric: took 325.975105ms to configureAuth
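configureAuth above regenerates the machine's server certificate with the SANs listed in the provision log and copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. One illustrative way to inspect the result, assuming shell access to the guest:

    $ sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'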
	I0719 15:47:59.246037   59208 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:47:59.246215   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:47:59.246312   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.248757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249079   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.249111   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.249354   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.249526   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249679   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.249779   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.249924   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.250142   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.250161   59208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:47:59.743101   58376 start.go:364] duration metric: took 52.710718223s to acquireMachinesLock for "embed-certs-817144"
	I0719 15:47:59.743169   58376 start.go:96] Skipping create...Using existing machine configuration
	I0719 15:47:59.743177   58376 fix.go:54] fixHost starting: 
	I0719 15:47:59.743553   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:47:59.743591   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:47:59.760837   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34547
	I0719 15:47:59.761216   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:47:59.761734   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:47:59.761754   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:47:59.762080   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:47:59.762291   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:47:59.762504   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:47:59.764044   58376 fix.go:112] recreateIfNeeded on embed-certs-817144: state=Stopped err=<nil>
	I0719 15:47:59.764067   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	W0719 15:47:59.764217   58376 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 15:47:59.766063   58376 out.go:177] * Restarting existing kvm2 VM for "embed-certs-817144" ...
	I0719 15:47:56.582753   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:58.583049   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:47:59.508289   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:47:59.508327   59208 machine.go:97] duration metric: took 941.842272ms to provisionDockerMachine
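The provisioning step that just completed writes CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O. A rough way to confirm the drop-in, assuming the crio unit sources that file via an EnvironmentFile directive (an assumption, not verified here):

    $ cat /etc/sysconfig/crio.minikube
    $ systemctl cat crio | grep -i environment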
	I0719 15:47:59.508343   59208 start.go:293] postStartSetup for "default-k8s-diff-port-601445" (driver="kvm2")
	I0719 15:47:59.508359   59208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:47:59.508383   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.508687   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:47:59.508720   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.511449   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.511887   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.511911   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.512095   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.512275   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.512437   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.512580   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.596683   59208 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:47:59.600761   59208 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:47:59.600782   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:47:59.600841   59208 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:47:59.600911   59208 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:47:59.600996   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:47:59.609867   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:47:59.633767   59208 start.go:296] duration metric: took 125.408568ms for postStartSetup
	I0719 15:47:59.633803   59208 fix.go:56] duration metric: took 20.874627736s for fixHost
	I0719 15:47:59.633825   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.636600   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.636944   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.636977   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.637121   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.637328   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637495   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.637640   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.637811   59208 main.go:141] libmachine: Using SSH client type: native
	I0719 15:47:59.637989   59208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.144 22 <nil> <nil>}
	I0719 15:47:59.637999   59208 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:47:59.742929   59208 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404079.728807147
	
	I0719 15:47:59.742957   59208 fix.go:216] guest clock: 1721404079.728807147
	I0719 15:47:59.742967   59208 fix.go:229] Guest: 2024-07-19 15:47:59.728807147 +0000 UTC Remote: 2024-07-19 15:47:59.633807395 +0000 UTC m=+200.280673126 (delta=94.999752ms)
	I0719 15:47:59.743008   59208 fix.go:200] guest clock delta is within tolerance: 94.999752ms
	I0719 15:47:59.743013   59208 start.go:83] releasing machines lock for "default-k8s-diff-port-601445", held for 20.983876369s
	I0719 15:47:59.743040   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.743262   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:47:59.746145   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746501   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.746534   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.746662   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747297   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747461   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:47:59.747553   59208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:47:59.747603   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.747714   59208 ssh_runner.go:195] Run: cat /version.json
	I0719 15:47:59.747738   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:47:59.750268   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750583   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750664   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750751   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.750916   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:47:59.750932   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.750942   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:47:59.751127   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751170   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:47:59.751269   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751353   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:47:59.751421   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.751489   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:47:59.751646   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:47:59.834888   59208 ssh_runner.go:195] Run: systemctl --version
	I0719 15:47:59.859285   59208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:00.009771   59208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:00.015906   59208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:00.015973   59208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:00.032129   59208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:00.032150   59208 start.go:495] detecting cgroup driver to use...
	I0719 15:48:00.032215   59208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:00.050052   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:00.063282   59208 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:00.063341   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:00.078073   59208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:00.092872   59208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:00.217105   59208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:00.364335   59208 docker.go:233] disabling docker service ...
	I0719 15:48:00.364403   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:00.384138   59208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:00.400280   59208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:00.543779   59208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:00.671512   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:00.687337   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:00.708629   59208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:00.708690   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.720508   59208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:00.720580   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.732952   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.743984   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.756129   59208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:00.766873   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.777481   59208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.799865   59208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:00.812450   59208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:00.822900   59208 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:00.822964   59208 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:00.836117   59208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
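The exit status 255 above only means the br_netfilter module was not loaded yet, so the sysctl key did not exist; after modprobe it does. A minimal manual equivalent of this prep step, assuming root on the guest:

    $ sudo modprobe br_netfilter
    $ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward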
	I0719 15:48:00.845958   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:00.959002   59208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:01.104519   59208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:01.104598   59208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:01.110652   59208 start.go:563] Will wait 60s for crictl version
	I0719 15:48:01.110711   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:48:01.114358   59208 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:01.156969   59208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:01.157063   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.187963   59208 ssh_runner.go:195] Run: crio --version
	I0719 15:48:01.219925   59208 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
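At this point the container runtime is considered ready. An illustrative manual check of the same information, assuming crictl is on the guest's PATH:

    $ sudo crictl version
    $ sudo crictl info | head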
	I0719 15:47:58.299763   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:58.800069   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.299998   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:47:59.800005   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.300717   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:00.800601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.300433   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.800788   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.300324   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:02.800142   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:01.221101   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetIP
	I0719 15:48:01.224369   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224757   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:01.224789   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:01.224989   59208 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:01.229813   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
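The one-liner above rewrites /etc/hosts so that host.minikube.internal points at the libvirt gateway (192.168.61.1 here), which lets workloads inside the VM reach the host. Illustrative check only:

    $ grep host.minikube.internal /etc/hosts
    192.168.61.1	host.minikube.internal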
	I0719 15:48:01.243714   59208 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:01.243843   59208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:01.243886   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:01.283013   59208 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:01.283093   59208 ssh_runner.go:195] Run: which lz4
	I0719 15:48:01.287587   59208 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:48:01.291937   59208 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:01.291965   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:02.810751   59208 crio.go:462] duration metric: took 1.52319928s to copy over tarball
	I0719 15:48:02.810846   59208 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:47:59.767270   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Start
	I0719 15:47:59.767433   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring networks are active...
	I0719 15:47:59.768056   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network default is active
	I0719 15:47:59.768371   58376 main.go:141] libmachine: (embed-certs-817144) Ensuring network mk-embed-certs-817144 is active
	I0719 15:47:59.768804   58376 main.go:141] libmachine: (embed-certs-817144) Getting domain xml...
	I0719 15:47:59.769396   58376 main.go:141] libmachine: (embed-certs-817144) Creating domain...
	I0719 15:48:01.024457   58376 main.go:141] libmachine: (embed-certs-817144) Waiting to get IP...
	I0719 15:48:01.025252   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.025697   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.025741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.025660   60153 retry.go:31] will retry after 211.260956ms: waiting for machine to come up
	I0719 15:48:01.238027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.238561   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.238588   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.238529   60153 retry.go:31] will retry after 346.855203ms: waiting for machine to come up
	I0719 15:48:01.587201   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.587773   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.587815   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.587736   60153 retry.go:31] will retry after 327.69901ms: waiting for machine to come up
	I0719 15:48:01.917433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:01.917899   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:01.917931   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:01.917864   60153 retry.go:31] will retry after 474.430535ms: waiting for machine to come up
	I0719 15:48:02.393610   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.394139   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.394168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.394061   60153 retry.go:31] will retry after 491.247455ms: waiting for machine to come up
	I0719 15:48:02.886826   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:02.887296   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:02.887329   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:02.887249   60153 retry.go:31] will retry after 661.619586ms: waiting for machine to come up
	I0719 15:48:03.550633   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:03.551175   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:03.551199   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:03.551126   60153 retry.go:31] will retry after 1.10096194s: waiting for machine to come up
	I0719 15:48:00.583866   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:02.585144   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:03.300240   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:03.799829   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.299793   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:04.800609   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.300595   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.799844   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.300230   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:06.800150   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.299923   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:07.800063   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:05.112520   59208 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301644218s)
	I0719 15:48:05.112555   59208 crio.go:469] duration metric: took 2.301774418s to extract the tarball
	I0719 15:48:05.112565   59208 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:05.151199   59208 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:05.193673   59208 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:05.193701   59208 cache_images.go:84] Images are preloaded, skipping loading
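The preload check above compares the crictl image list against the images expected for Kubernetes v1.30.3; since the tarball was just extracted, everything is already present. An illustrative equivalent on the guest:

    $ sudo crictl images | grep 'registry.k8s.io/kube-'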
	I0719 15:48:05.193712   59208 kubeadm.go:934] updating node { 192.168.61.144 8444 v1.30.3 crio true true} ...
	I0719 15:48:05.193836   59208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-601445 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:05.193919   59208 ssh_runner.go:195] Run: crio config
	I0719 15:48:05.239103   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:05.239131   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:05.239146   59208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:05.239176   59208 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.144 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-601445 NodeName:default-k8s-diff-port-601445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:05.239374   59208 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.144
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-601445"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
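The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and later copied into place as /var/tmp/minikube/kubeadm.yaml. As a hedged illustration (not something this test run does), a config like this can be exercised without touching the node via a dry run:

    $ sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run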
	
	I0719 15:48:05.239441   59208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:05.249729   59208 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:05.249799   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:05.259540   59208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0719 15:48:05.277388   59208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:05.294497   59208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0719 15:48:05.313990   59208 ssh_runner.go:195] Run: grep 192.168.61.144	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:05.318959   59208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:05.332278   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:05.463771   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:05.480474   59208 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445 for IP: 192.168.61.144
	I0719 15:48:05.480499   59208 certs.go:194] generating shared ca certs ...
	I0719 15:48:05.480520   59208 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:05.480674   59208 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:05.480732   59208 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:05.480746   59208 certs.go:256] generating profile certs ...
	I0719 15:48:05.480859   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.key
	I0719 15:48:05.480937   59208 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key.e31ea710
	I0719 15:48:05.480992   59208 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key
	I0719 15:48:05.481128   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:05.481165   59208 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:05.481180   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:05.481210   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:05.481245   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:05.481276   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:05.481334   59208 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:05.481940   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:05.524604   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:05.562766   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:05.618041   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:05.660224   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 15:48:05.689232   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:05.713890   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:05.738923   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:05.764447   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:05.793905   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:05.823630   59208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:05.849454   59208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:05.868309   59208 ssh_runner.go:195] Run: openssl version
	I0719 15:48:05.874423   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:05.887310   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.891994   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.892057   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:05.898173   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:05.911541   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:05.922829   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927537   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.927600   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:05.933642   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:48:05.946269   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:05.958798   59208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963899   59208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.963959   59208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:05.969801   59208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
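The ln -fs commands above create the OpenSSL subject-hash symlinks (<hash>.0) that let the system trust store look up each CA by hash. For example, the hash used for minikubeCA.pem is the one computed here (expected to match the b5213941.0 symlink created above):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0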
	I0719 15:48:05.980966   59208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:05.985487   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:05.991303   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:05.997143   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:06.003222   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:06.008984   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:06.014939   59208 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
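The -checkend 86400 checks above exit 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours); a failing check is what would trigger regeneration. Illustrative one-liner:

    $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo 'valid for 24h+'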
	I0719 15:48:06.020976   59208 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-601445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-601445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:06.021059   59208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:06.021106   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.066439   59208 cri.go:89] found id: ""
	I0719 15:48:06.066503   59208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:06.080640   59208 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:06.080663   59208 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:06.080730   59208 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:06.093477   59208 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:06.094740   59208 kubeconfig.go:125] found "default-k8s-diff-port-601445" server: "https://192.168.61.144:8444"
	I0719 15:48:06.096907   59208 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:06.107974   59208 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.144
	I0719 15:48:06.108021   59208 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:06.108035   59208 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:06.108109   59208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:06.156149   59208 cri.go:89] found id: ""
	I0719 15:48:06.156222   59208 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:06.172431   59208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:06.182482   59208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:06.182511   59208 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:06.182562   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0719 15:48:06.192288   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:06.192361   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:06.202613   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0719 15:48:06.212553   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:06.212624   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:06.223086   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.233949   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:06.234007   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:06.247224   59208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0719 15:48:06.257851   59208 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:06.257908   59208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:06.268650   59208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:06.279549   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:06.421964   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.407768   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.614213   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.686560   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:07.769476   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:07.769590   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.270472   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.770366   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.795057   59208 api_server.go:72] duration metric: took 1.025580277s to wait for apiserver process to appear ...
	I0719 15:48:08.795086   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:08.795112   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:08.795617   59208 api_server.go:269] stopped: https://192.168.61.144:8444/healthz: Get "https://192.168.61.144:8444/healthz": dial tcp 192.168.61.144:8444: connect: connection refused
	I0719 15:48:09.295459   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:04.653309   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:04.653784   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:04.653846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:04.653753   60153 retry.go:31] will retry after 1.276153596s: waiting for machine to come up
	I0719 15:48:05.931365   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:05.931820   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:05.931848   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:05.931798   60153 retry.go:31] will retry after 1.372328403s: waiting for machine to come up
	I0719 15:48:07.305390   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:07.305892   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:07.305922   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:07.305850   60153 retry.go:31] will retry after 1.738311105s: waiting for machine to come up
	I0719 15:48:09.046095   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:09.046526   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:09.046558   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:09.046481   60153 retry.go:31] will retry after 2.169449629s: waiting for machine to come up
	I0719 15:48:05.084157   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:07.583246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:09.584584   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:11.457584   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.457651   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.457670   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.490130   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:11.490165   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:11.795439   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:11.803724   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:11.803757   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.295287   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.300002   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:12.300034   59208 api_server.go:103] status: https://192.168.61.144:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:12.795285   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:48:12.800067   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:48:12.808020   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:12.808045   59208 api_server.go:131] duration metric: took 4.012952016s to wait for apiserver health ...
	I0719 15:48:12.808055   59208 cni.go:84] Creating CNI manager for ""
	I0719 15:48:12.808064   59208 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:12.810134   59208 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:48:08.300278   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:08.799805   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.299882   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:09.800690   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.300543   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:10.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.300260   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:11.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.299850   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.800160   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:12.812011   59208 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:12.824520   59208 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:12.846711   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:12.855286   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:12.855315   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:12.855322   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:12.855329   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:12.855335   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:12.855345   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:12.855353   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:12.855360   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:12.855369   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:12.855377   59208 system_pods.go:74] duration metric: took 8.645314ms to wait for pod list to return data ...
	I0719 15:48:12.855390   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:12.858531   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:12.858556   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:12.858566   59208 node_conditions.go:105] duration metric: took 3.171526ms to run NodePressure ...
	I0719 15:48:12.858581   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:13.176014   59208 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180575   59208 kubeadm.go:739] kubelet initialised
	I0719 15:48:13.180602   59208 kubeadm.go:740] duration metric: took 4.561708ms waiting for restarted kubelet to initialise ...
	I0719 15:48:13.180612   59208 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:13.187723   59208 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.204023   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204052   59208 pod_ready.go:81] duration metric: took 16.303152ms for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.204061   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.204070   59208 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.212768   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212790   59208 pod_ready.go:81] duration metric: took 8.709912ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.212800   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.212812   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.220452   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220474   59208 pod_ready.go:81] duration metric: took 7.650656ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.220482   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.220489   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.251973   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.251997   59208 pod_ready.go:81] duration metric: took 31.499608ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.252008   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.252029   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:13.650914   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650940   59208 pod_ready.go:81] duration metric: took 398.904724ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:13.650948   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-proxy-r7b2z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:13.650954   59208 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.050582   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050615   59208 pod_ready.go:81] duration metric: took 399.652069ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.050630   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.050642   59208 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:14.450349   59208 pod_ready.go:97] node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450379   59208 pod_ready.go:81] duration metric: took 399.72875ms for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:14.450391   59208 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-601445" hosting pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.450399   59208 pod_ready.go:38] duration metric: took 1.269776818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:14.450416   59208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:14.462296   59208 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:14.462318   59208 kubeadm.go:597] duration metric: took 8.38163922s to restartPrimaryControlPlane
	I0719 15:48:14.462329   59208 kubeadm.go:394] duration metric: took 8.441360513s to StartCluster
	I0719 15:48:14.462348   59208 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.462422   59208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:14.464082   59208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:14.464400   59208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.144 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:14.464459   59208 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:14.464531   59208 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464570   59208 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.464581   59208 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:14.464592   59208 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464610   59208 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-601445"
	I0719 15:48:14.464636   59208 config.go:182] Loaded profile config "default-k8s-diff-port-601445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:14.464670   59208 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:14.464672   59208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-601445"
	W0719 15:48:14.464684   59208 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:14.464613   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.464740   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.465050   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465111   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465151   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465178   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.465199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.465235   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.466230   59208 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:11.217150   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:11.217605   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:11.217634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:11.217561   60153 retry.go:31] will retry after 3.406637692s: waiting for machine to come up
	I0719 15:48:14.467899   59208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:14.481294   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0719 15:48:14.481538   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I0719 15:48:14.481541   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0719 15:48:14.481658   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.481909   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.482122   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482145   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482363   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482387   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482461   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.482478   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.482590   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482704   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482762   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.482853   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.483131   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483159   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.483199   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.483217   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.486437   59208 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-601445"
	W0719 15:48:14.486462   59208 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:14.486492   59208 host.go:66] Checking if "default-k8s-diff-port-601445" exists ...
	I0719 15:48:14.486893   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.486932   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.498388   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0719 15:48:14.498897   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0719 15:48:14.498952   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499251   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.499660   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499678   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.499838   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.499853   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.500068   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500168   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.500232   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.500410   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.501505   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41395
	I0719 15:48:14.501876   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.502391   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.502413   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.502456   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.502745   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.503006   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.503314   59208 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:14.503341   59208 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:14.505162   59208 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:14.505166   59208 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:12.084791   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.582986   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:14.506465   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:14.506487   59208 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:14.506506   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.506585   59208 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.506604   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:14.506628   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.510227   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511092   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511134   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511207   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511231   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511257   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511370   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511390   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.511570   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.511574   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.511662   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.511713   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.511787   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.511840   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.520612   59208 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0719 15:48:14.521013   59208 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:14.521451   59208 main.go:141] libmachine: Using API Version  1
	I0719 15:48:14.521470   59208 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:14.521817   59208 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:14.522016   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetState
	I0719 15:48:14.523622   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .DriverName
	I0719 15:48:14.523862   59208 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.523876   59208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:14.523895   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHHostname
	I0719 15:48:14.526426   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.526882   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:8a:83", ip: ""} in network mk-default-k8s-diff-port-601445: {Iface:virbr3 ExpiryTime:2024-07-19 16:47:50 +0000 UTC Type:0 Mac:52:54:00:97:8a:83 Iaid: IPaddr:192.168.61.144 Prefix:24 Hostname:default-k8s-diff-port-601445 Clientid:01:52:54:00:97:8a:83}
	I0719 15:48:14.526941   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | domain default-k8s-diff-port-601445 has defined IP address 192.168.61.144 and MAC address 52:54:00:97:8a:83 in network mk-default-k8s-diff-port-601445
	I0719 15:48:14.527060   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHPort
	I0719 15:48:14.527190   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHKeyPath
	I0719 15:48:14.527344   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .GetSSHUsername
	I0719 15:48:14.527439   59208 sshutil.go:53] new ssh client: &{IP:192.168.61.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/default-k8s-diff-port-601445/id_rsa Username:docker}
	I0719 15:48:14.674585   59208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:14.693700   59208 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:14.752990   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:14.856330   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:14.856350   59208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:14.884762   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:14.884784   59208 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:14.895548   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:14.915815   59208 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:14.915844   59208 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:14.979442   59208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:15.098490   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098517   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098869   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.098893   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.098902   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.098912   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.099141   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.099158   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.105078   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.105252   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.105506   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.105526   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.802868   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.802892   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803265   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803279   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.803285   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.803248   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.803517   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.803530   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.803577   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.905945   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.905972   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906244   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906266   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906266   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) DBG | Closing plugin on server side
	I0719 15:48:15.906275   59208 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:15.906283   59208 main.go:141] libmachine: (default-k8s-diff-port-601445) Calling .Close
	I0719 15:48:15.906484   59208 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:15.906496   59208 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:15.906511   59208 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-601445"
	I0719 15:48:15.908671   59208 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0719 15:48:13.299986   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:13.800036   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.300736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:14.799875   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.300297   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.800535   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.299951   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:16.800667   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.300251   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:17.800590   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:15.910057   59208 addons.go:510] duration metric: took 1.445597408s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0719 15:48:16.697266   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:18.698379   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:14.627319   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:14.627800   58376 main.go:141] libmachine: (embed-certs-817144) DBG | unable to find current IP address of domain embed-certs-817144 in network mk-embed-certs-817144
	I0719 15:48:14.627822   58376 main.go:141] libmachine: (embed-certs-817144) DBG | I0719 15:48:14.627767   60153 retry.go:31] will retry after 4.38444645s: waiting for machine to come up
	I0719 15:48:19.016073   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016711   58376 main.go:141] libmachine: (embed-certs-817144) Found IP for machine: 192.168.72.37
	I0719 15:48:19.016742   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has current primary IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.016749   58376 main.go:141] libmachine: (embed-certs-817144) Reserving static IP address...
	I0719 15:48:19.017180   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.017204   58376 main.go:141] libmachine: (embed-certs-817144) Reserved static IP address: 192.168.72.37
	I0719 15:48:19.017222   58376 main.go:141] libmachine: (embed-certs-817144) DBG | skip adding static IP to network mk-embed-certs-817144 - found existing host DHCP lease matching {name: "embed-certs-817144", mac: "52:54:00:7b:4e:e4", ip: "192.168.72.37"}
	I0719 15:48:19.017239   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Getting to WaitForSSH function...
	I0719 15:48:19.017254   58376 main.go:141] libmachine: (embed-certs-817144) Waiting for SSH to be available...
	I0719 15:48:19.019511   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.019867   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.019896   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.020064   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH client type: external
	I0719 15:48:19.020080   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa (-rw-------)
	I0719 15:48:19.020107   58376 main.go:141] libmachine: (embed-certs-817144) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 15:48:19.020115   58376 main.go:141] libmachine: (embed-certs-817144) DBG | About to run SSH command:
	I0719 15:48:19.020124   58376 main.go:141] libmachine: (embed-certs-817144) DBG | exit 0
	I0719 15:48:19.150328   58376 main.go:141] libmachine: (embed-certs-817144) DBG | SSH cmd err, output: <nil>: 
	I0719 15:48:19.150676   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetConfigRaw
	I0719 15:48:19.151317   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.154087   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154600   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.154634   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.154907   58376 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/config.json ...
	I0719 15:48:19.155143   58376 machine.go:94] provisionDockerMachine start ...
	I0719 15:48:19.155168   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:19.155369   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.157741   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158027   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.158060   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.158175   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.158368   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158618   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.158769   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.158945   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.159144   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.159161   58376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 15:48:19.274836   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 15:48:19.274863   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275148   58376 buildroot.go:166] provisioning hostname "embed-certs-817144"
	I0719 15:48:19.275174   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.275373   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.278103   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278489   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.278518   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.278696   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.278892   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279111   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.279299   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.279577   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.279798   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.279815   58376 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-817144 && echo "embed-certs-817144" | sudo tee /etc/hostname
	I0719 15:48:19.413956   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-817144
	
	I0719 15:48:19.413988   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.416836   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417168   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.417196   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.417408   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:19.417599   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417777   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:19.417911   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:19.418083   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:19.418274   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:19.418290   58376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-817144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-817144/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-817144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:48:16.583538   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.083431   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:19.541400   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:48:19.541439   58376 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-3847/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-3847/.minikube}
	I0719 15:48:19.541464   58376 buildroot.go:174] setting up certificates
	I0719 15:48:19.541478   58376 provision.go:84] configureAuth start
	I0719 15:48:19.541495   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetMachineName
	I0719 15:48:19.541801   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:19.544209   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544579   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.544608   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.544766   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:19.547206   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:19.547570   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:19.547714   58376 provision.go:143] copyHostCerts
	I0719 15:48:19.547772   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem, removing ...
	I0719 15:48:19.547782   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem
	I0719 15:48:19.547827   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/cert.pem (1123 bytes)
	I0719 15:48:19.547939   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem, removing ...
	I0719 15:48:19.547949   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem
	I0719 15:48:19.547969   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/key.pem (1675 bytes)
	I0719 15:48:19.548024   58376 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem, removing ...
	I0719 15:48:19.548031   58376 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem
	I0719 15:48:19.548047   58376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-3847/.minikube/ca.pem (1082 bytes)
	I0719 15:48:19.548093   58376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem org=jenkins.embed-certs-817144 san=[127.0.0.1 192.168.72.37 embed-certs-817144 localhost minikube]
	I0719 15:48:20.024082   58376 provision.go:177] copyRemoteCerts
	I0719 15:48:20.024137   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:48:20.024157   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.026940   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027322   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.027358   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.027541   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.027819   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.028011   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.028165   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.117563   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:48:20.144428   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 15:48:20.171520   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:48:20.195188   58376 provision.go:87] duration metric: took 653.6924ms to configureAuth
	I0719 15:48:20.195215   58376 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:48:20.195432   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:48:20.195518   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.198648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.198970   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.199007   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.199126   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.199335   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199527   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.199687   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.199849   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.200046   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.200063   58376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 15:48:20.502753   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 15:48:20.502782   58376 machine.go:97] duration metric: took 1.347623735s to provisionDockerMachine
	I0719 15:48:20.502794   58376 start.go:293] postStartSetup for "embed-certs-817144" (driver="kvm2")
	I0719 15:48:20.502805   58376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:48:20.502821   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.503204   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:48:20.503248   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.506142   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506537   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.506563   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.506697   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.506938   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.507125   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.507258   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.593356   58376 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:48:20.597843   58376 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 15:48:20.597877   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/addons for local assets ...
	I0719 15:48:20.597948   58376 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-3847/.minikube/files for local assets ...
	I0719 15:48:20.598048   58376 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem -> 110122.pem in /etc/ssl/certs
	I0719 15:48:20.598164   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 15:48:20.607951   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:20.634860   58376 start.go:296] duration metric: took 132.043928ms for postStartSetup
	I0719 15:48:20.634900   58376 fix.go:56] duration metric: took 20.891722874s for fixHost
	I0719 15:48:20.634919   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.637846   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638181   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.638218   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.638439   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.638674   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.638884   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.639054   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.639256   58376 main.go:141] libmachine: Using SSH client type: native
	I0719 15:48:20.639432   58376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I0719 15:48:20.639444   58376 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:48:20.755076   58376 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721404100.730818472
	
	I0719 15:48:20.755107   58376 fix.go:216] guest clock: 1721404100.730818472
	I0719 15:48:20.755115   58376 fix.go:229] Guest: 2024-07-19 15:48:20.730818472 +0000 UTC Remote: 2024-07-19 15:48:20.634903926 +0000 UTC m=+356.193225446 (delta=95.914546ms)
	I0719 15:48:20.755134   58376 fix.go:200] guest clock delta is within tolerance: 95.914546ms
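The fix.go entries above read the guest clock over SSH with "date +%s.%N", compare it against the host clock, and accept the roughly 96 ms drift because it falls inside the tolerance. A rough shell sketch of the same comparison, run from the host against this profile (the awk-based delta print is illustrative, not minikube's own code):
	guest=$(minikube -p embed-certs-817144 ssh -- date +%s.%N)   # guest wall clock, read over SSH
	host=$(date +%s.%N)                                          # host wall clock
	# print the absolute drift between host and guest in seconds
	echo "$host $guest" | awk '{d=$1-$2; if (d<0) d=-d; printf "delta=%.6fs\n", d}'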
	I0719 15:48:20.755139   58376 start.go:83] releasing machines lock for "embed-certs-817144", held for 21.011996674s
	I0719 15:48:20.755171   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.755465   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:20.758255   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758621   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.758644   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.758861   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759348   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759545   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:20.759656   58376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:48:20.759720   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.759780   58376 ssh_runner.go:195] Run: cat /version.json
	I0719 15:48:20.759802   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:20.762704   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.762833   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763161   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763202   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763399   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763493   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:20.763545   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:20.763608   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763693   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:20.763772   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764001   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:20.763996   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.764156   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:20.764278   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:20.867430   58376 ssh_runner.go:195] Run: systemctl --version
	I0719 15:48:20.873463   58376 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 15:48:21.029369   58376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:48:21.035953   58376 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:48:21.036028   58376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:48:21.054352   58376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:48:21.054381   58376 start.go:495] detecting cgroup driver to use...
	I0719 15:48:21.054440   58376 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:48:21.071903   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:48:21.088624   58376 docker.go:217] disabling cri-docker service (if available) ...
	I0719 15:48:21.088688   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 15:48:21.104322   58376 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 15:48:21.120089   58376 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 15:48:21.242310   58376 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 15:48:21.422514   58376 docker.go:233] disabling docker service ...
	I0719 15:48:21.422589   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 15:48:21.439213   58376 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 15:48:21.454361   58376 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 15:48:21.577118   58376 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 15:48:21.704150   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 15:48:21.719160   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:48:21.738765   58376 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 15:48:21.738817   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.750720   58376 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 15:48:21.750798   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.763190   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.775630   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.787727   58376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:48:21.799520   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.812016   58376 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 15:48:21.830564   58376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
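The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, put conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A minimal sketch of the resulting drop-in, with values taken from those commands (the TOML section headers and any other pre-existing keys in the file are assumptions):
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]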
	I0719 15:48:21.841770   58376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:48:21.851579   58376 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 15:48:21.851651   58376 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 15:48:21.864529   58376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:48:21.874301   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:21.994669   58376 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 15:48:22.131448   58376 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 15:48:22.131521   58376 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 15:48:22.137328   58376 start.go:563] Will wait 60s for crictl version
	I0719 15:48:22.137391   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:48:22.141409   58376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:48:22.182947   58376 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 15:48:22.183029   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.217804   58376 ssh_runner.go:195] Run: crio --version
	I0719 15:48:22.252450   58376 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 15:48:18.300557   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:18.800420   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.300696   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:19.799874   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.300803   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:20.800634   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.300760   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.799929   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.300267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:22.800463   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:21.197350   59208 node_ready.go:53] node "default-k8s-diff-port-601445" has status "Ready":"False"
	I0719 15:48:22.197536   59208 node_ready.go:49] node "default-k8s-diff-port-601445" has status "Ready":"True"
	I0719 15:48:22.197558   59208 node_ready.go:38] duration metric: took 7.503825721s for node "default-k8s-diff-port-601445" to be "Ready" ...
	I0719 15:48:22.197568   59208 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:22.203380   59208 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:24.211899   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:22.253862   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetIP
	I0719 15:48:22.256397   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256763   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:22.256791   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:22.256968   58376 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0719 15:48:22.261184   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:22.274804   58376 kubeadm.go:883] updating cluster {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 15:48:22.274936   58376 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 15:48:22.274994   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:22.317501   58376 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 15:48:22.317559   58376 ssh_runner.go:195] Run: which lz4
	I0719 15:48:22.321646   58376 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:48:22.326455   58376 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:48:22.326478   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 15:48:23.820083   58376 crio.go:462] duration metric: took 1.498469232s to copy over tarball
	I0719 15:48:23.820155   58376 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:48:21.583230   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.585191   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:23.300116   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:23.800737   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.300641   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:24.800158   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.300678   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:25.800635   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.299778   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.799791   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.299845   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:27.800458   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.710838   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.786269   59208 pod_ready.go:102] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:26.105248   58376 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.285062307s)
	I0719 15:48:26.105271   58376 crio.go:469] duration metric: took 2.285164513s to extract the tarball
	I0719 15:48:26.105279   58376 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:48:26.142811   58376 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 15:48:26.185631   58376 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 15:48:26.185660   58376 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:48:26.185668   58376 kubeadm.go:934] updating node { 192.168.72.37 8443 v1.30.3 crio true true} ...
	I0719 15:48:26.185784   58376 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-817144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 15:48:26.185857   58376 ssh_runner.go:195] Run: crio config
	I0719 15:48:26.238150   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:26.238172   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:26.238183   58376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 15:48:26.238211   58376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.37 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-817144 NodeName:embed-certs-817144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:48:26.238449   58376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-817144"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:48:26.238515   58376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 15:48:26.249200   58376 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:48:26.249278   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:48:26.258710   58376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 15:48:26.279235   58376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:48:26.299469   58376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0719 15:48:26.317789   58376 ssh_runner.go:195] Run: grep 192.168.72.37	control-plane.minikube.internal$ /etc/hosts
	I0719 15:48:26.321564   58376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:48:26.333153   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:26.452270   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:26.469344   58376 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144 for IP: 192.168.72.37
	I0719 15:48:26.469366   58376 certs.go:194] generating shared ca certs ...
	I0719 15:48:26.469382   58376 certs.go:226] acquiring lock for ca certs: {Name:mk638c072f0071983aef143d50a1226fac96a359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:26.469530   58376 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key
	I0719 15:48:26.469586   58376 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key
	I0719 15:48:26.469601   58376 certs.go:256] generating profile certs ...
	I0719 15:48:26.469694   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/client.key
	I0719 15:48:26.469791   58376 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key.928d4c24
	I0719 15:48:26.469846   58376 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key
	I0719 15:48:26.469982   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem (1338 bytes)
	W0719 15:48:26.470021   58376 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012_empty.pem, impossibly tiny 0 bytes
	I0719 15:48:26.470035   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 15:48:26.470071   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:48:26.470105   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:48:26.470140   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/certs/key.pem (1675 bytes)
	I0719 15:48:26.470197   58376 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem (1708 bytes)
	I0719 15:48:26.470812   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:48:26.508455   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 15:48:26.537333   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:48:26.565167   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:48:26.601152   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 15:48:26.636408   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:48:26.669076   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:48:26.695438   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/embed-certs-817144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:48:26.718897   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/ssl/certs/110122.pem --> /usr/share/ca-certificates/110122.pem (1708 bytes)
	I0719 15:48:26.741760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:48:26.764760   58376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-3847/.minikube/certs/11012.pem --> /usr/share/ca-certificates/11012.pem (1338 bytes)
	I0719 15:48:26.787772   58376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:48:26.807332   58376 ssh_runner.go:195] Run: openssl version
	I0719 15:48:26.815182   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11012.pem && ln -fs /usr/share/ca-certificates/11012.pem /etc/ssl/certs/11012.pem"
	I0719 15:48:26.827373   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831926   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:34 /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.831973   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11012.pem
	I0719 15:48:26.837923   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11012.pem /etc/ssl/certs/51391683.0"
	I0719 15:48:26.849158   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110122.pem && ln -fs /usr/share/ca-certificates/110122.pem /etc/ssl/certs/110122.pem"
	I0719 15:48:26.860466   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865178   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:34 /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.865249   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110122.pem
	I0719 15:48:26.870873   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110122.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 15:48:26.882044   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:48:26.893283   58376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897750   58376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:22 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.897809   58376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:48:26.903395   58376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
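Each of the three certificate installs above follows the same pattern: hash the certificate with openssl, then symlink it into /etc/ssl/certs under that hash so the system trust store can find it (for example, 51391683.0 is the subject hash of 11012.pem). A minimal shell sketch of that pattern, using a placeholder file name rather than one from the log:
	# compute the subject hash OpenSSL uses when looking up CA certificates
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/mycert.pem)
	# link the certificate into the trust directory as "<hash>.0"
	sudo ln -fs /usr/share/ca-certificates/mycert.pem "/etc/ssl/certs/${hash}.0"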
	I0719 15:48:26.914389   58376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 15:48:26.918904   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 15:48:26.924659   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 15:48:26.930521   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 15:48:26.936808   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 15:48:26.942548   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 15:48:26.948139   58376 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 15:48:26.954557   58376 kubeadm.go:392] StartCluster: {Name:embed-certs-817144 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-817144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 15:48:26.954644   58376 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 15:48:26.954722   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:26.994129   58376 cri.go:89] found id: ""
	I0719 15:48:26.994205   58376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:48:27.006601   58376 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 15:48:27.006624   58376 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 15:48:27.006699   58376 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 15:48:27.017166   58376 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:48:27.018580   58376 kubeconfig.go:125] found "embed-certs-817144" server: "https://192.168.72.37:8443"
	I0719 15:48:27.021622   58376 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 15:48:27.033000   58376 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.37
	I0719 15:48:27.033033   58376 kubeadm.go:1160] stopping kube-system containers ...
	I0719 15:48:27.033044   58376 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 15:48:27.033083   58376 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 15:48:27.073611   58376 cri.go:89] found id: ""
	I0719 15:48:27.073678   58376 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 15:48:27.092986   58376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:48:27.103557   58376 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:48:27.103580   58376 kubeadm.go:157] found existing configuration files:
	
	I0719 15:48:27.103636   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:48:27.113687   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:48:27.113752   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:48:27.123696   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:48:27.132928   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:48:27.132984   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:48:27.142566   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.152286   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:48:27.152335   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:48:27.161701   58376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:48:27.171532   58376 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:48:27.171591   58376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:48:27.181229   58376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:48:27.192232   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:27.330656   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.287561   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.513476   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:28.616308   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
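The five kubeadm phase invocations above are plain shell commands executed over SSH during the control-plane restart. A minimal Go sketch of replaying that sequence locally with os/exec follows; the binary path, PATH prefix, and config file location mirror the log but are assumptions here, not minikube's actual restart code:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them during restartPrimaryControlPlane.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Assumed paths, copied from the log for illustration only.
		cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.30.3:$PATH kubeadm init phase " + phase + " --config /var/tmp/minikube/kubeadm.yaml"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("phase %q: err=%v\n%s", phase, err, out)
	}
}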
	I0719 15:48:28.704518   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:48:28.704605   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.205265   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:26.082992   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.746255   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:28.300034   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:28.800118   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.300099   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.800538   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.300194   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.800056   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.300473   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:31.799880   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.300181   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:32.800267   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:29.704706   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.204728   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:30.221741   58376 api_server.go:72] duration metric: took 1.517220815s to wait for apiserver process to appear ...
	I0719 15:48:30.221766   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:48:30.221786   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.665104   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.665138   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.665152   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.703238   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 15:48:32.703271   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 15:48:32.722495   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:32.748303   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:32.748344   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.222861   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.227076   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.227104   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:33.722705   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:33.734658   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 15:48:33.734683   58376 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 15:48:34.222279   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:48:34.227870   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:48:34.233621   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:48:34.233646   58376 api_server.go:131] duration metric: took 4.011873202s to wait for apiserver health ...
	I0719 15:48:34.233656   58376 cni.go:84] Creating CNI manager for ""
	I0719 15:48:34.233664   58376 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:48:34.235220   58376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
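The healthz wait above polls https://192.168.72.37:8443/healthz roughly every 500ms, tolerating the early 403 (anonymous user) and 500 (post-start hooks still failing) responses until the endpoint answers 200. A minimal, self-contained Go sketch of that polling pattern follows; the URL, interval, timeout, and the unauthenticated probe with TLS verification disabled are assumptions mirroring the log, not minikube's actual api_server.go implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe runs unauthenticated (hence the 403s above), so skip cert checks.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to return 200", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.37:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}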
	I0719 15:48:30.210533   59208 pod_ready.go:92] pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.210557   59208 pod_ready.go:81] duration metric: took 8.007151724s for pod "coredns-7db6d8ff4d-z7865" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.210568   59208 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215669   59208 pod_ready.go:92] pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.215692   59208 pod_ready.go:81] duration metric: took 5.116005ms for pod "etcd-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.215702   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222633   59208 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.222655   59208 pod_ready.go:81] duration metric: took 6.947228ms for pod "kube-apiserver-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.222664   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227631   59208 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.227656   59208 pod_ready.go:81] duration metric: took 4.985227ms for pod "kube-controller-manager-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.227667   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405047   59208 pod_ready.go:92] pod "kube-proxy-r7b2z" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.405073   59208 pod_ready.go:81] duration metric: took 177.397954ms for pod "kube-proxy-r7b2z" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.405085   59208 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805843   59208 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:30.805877   59208 pod_ready.go:81] duration metric: took 400.783803ms for pod "kube-scheduler-default-k8s-diff-port-601445" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:30.805890   59208 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:32.821231   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.236303   58376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:48:34.248133   58376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:48:34.270683   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:48:34.279907   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:48:34.279939   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 15:48:34.279946   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 15:48:34.279953   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 15:48:34.279960   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 15:48:34.279966   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 15:48:34.279972   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 15:48:34.279977   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:48:34.279982   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 15:48:34.279988   58376 system_pods.go:74] duration metric: took 9.282886ms to wait for pod list to return data ...
	I0719 15:48:34.279995   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:48:34.283597   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:48:34.283623   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:48:34.283634   58376 node_conditions.go:105] duration metric: took 3.634999ms to run NodePressure ...
	I0719 15:48:34.283649   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 15:48:31.082803   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:33.583510   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:34.586116   58376 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590095   58376 kubeadm.go:739] kubelet initialised
	I0719 15:48:34.590119   58376 kubeadm.go:740] duration metric: took 3.977479ms waiting for restarted kubelet to initialise ...
	I0719 15:48:34.590128   58376 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:34.594987   58376 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.600192   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600212   58376 pod_ready.go:81] duration metric: took 5.205124ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.600220   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.600225   58376 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.603934   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603952   58376 pod_ready.go:81] duration metric: took 3.719853ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.603959   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "etcd-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.603965   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.607778   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607803   58376 pod_ready.go:81] duration metric: took 3.830174ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.607817   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.607826   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:34.673753   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673775   58376 pod_ready.go:81] duration metric: took 65.937586ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:34.673783   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:34.673788   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.075506   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075539   58376 pod_ready.go:81] duration metric: took 401.743578ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.075548   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-proxy-4d4g9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.075554   58376 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.474518   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474546   58376 pod_ready.go:81] duration metric: took 398.985628ms for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.474558   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.474567   58376 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:35.874540   58376 pod_ready.go:97] node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874567   58376 pod_ready.go:81] duration metric: took 399.989978ms for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:48:35.874576   58376 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-817144" hosting pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.874582   58376 pod_ready.go:38] duration metric: took 1.284443879s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
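The pod_ready waits above repeatedly fetch each system-critical pod and test its PodReady condition, skipping pods whose node is not yet Ready. A minimal client-go sketch of that readiness check follows; the kubeconfig path, namespace, and pod name are copied from the log purely for illustration, and the 2-second poll interval is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19302-3847/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-n945p", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}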
	I0719 15:48:35.874646   58376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:48:35.886727   58376 ops.go:34] apiserver oom_adj: -16
	I0719 15:48:35.886751   58376 kubeadm.go:597] duration metric: took 8.880120513s to restartPrimaryControlPlane
	I0719 15:48:35.886760   58376 kubeadm.go:394] duration metric: took 8.932210528s to StartCluster
	I0719 15:48:35.886781   58376 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.886859   58376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:48:35.888389   58376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:48:35.888642   58376 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:48:35.888722   58376 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:48:35.888781   58376 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-817144"
	I0719 15:48:35.888810   58376 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-817144"
	I0719 15:48:35.888824   58376 addons.go:69] Setting default-storageclass=true in profile "embed-certs-817144"
	I0719 15:48:35.888839   58376 addons.go:69] Setting metrics-server=true in profile "embed-certs-817144"
	I0719 15:48:35.888875   58376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-817144"
	I0719 15:48:35.888888   58376 addons.go:234] Setting addon metrics-server=true in "embed-certs-817144"
	W0719 15:48:35.888897   58376 addons.go:243] addon metrics-server should already be in state true
	I0719 15:48:35.888931   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.888840   58376 config.go:182] Loaded profile config "embed-certs-817144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	W0719 15:48:35.888843   58376 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:48:35.889000   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.889231   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889242   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889247   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.889270   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889272   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.889282   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.890641   58376 out.go:177] * Verifying Kubernetes components...
	I0719 15:48:35.892144   58376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:48:35.905134   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0719 15:48:35.905572   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.905788   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0719 15:48:35.906107   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906132   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.906171   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.906496   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.906825   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.906846   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.907126   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.907179   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.907215   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.907289   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.908269   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43345
	I0719 15:48:35.908747   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.909343   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.909367   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.909787   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.910337   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910382   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.910615   58376 addons.go:234] Setting addon default-storageclass=true in "embed-certs-817144"
	W0719 15:48:35.910632   58376 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:48:35.910662   58376 host.go:66] Checking if "embed-certs-817144" exists ...
	I0719 15:48:35.910937   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.910965   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.926165   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0719 15:48:35.926905   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.926944   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0719 15:48:35.927369   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.927573   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927636   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927829   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.927847   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.927959   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928512   58376 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:48:35.928551   58376 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:48:35.928759   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.928824   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0719 15:48:35.928964   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.929176   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.929546   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.929557   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.929927   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.930278   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.931161   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.931773   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.933234   58376 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:48:35.933298   58376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:48:35.934543   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:48:35.934556   58376 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:48:35.934569   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.934629   58376 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:35.934642   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:48:35.934657   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.938300   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938628   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.938648   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.938679   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939150   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939340   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.939433   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.939479   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.939536   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.939619   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.939673   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.939937   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.940081   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.940190   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:35.947955   58376 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42371
	I0719 15:48:35.948206   58376 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:48:35.948643   58376 main.go:141] libmachine: Using API Version  1
	I0719 15:48:35.948654   58376 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:48:35.948961   58376 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:48:35.949119   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetState
	I0719 15:48:35.950572   58376 main.go:141] libmachine: (embed-certs-817144) Calling .DriverName
	I0719 15:48:35.951770   58376 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:35.951779   58376 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:48:35.951791   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHHostname
	I0719 15:48:35.957009   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957381   58376 main.go:141] libmachine: (embed-certs-817144) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:4e:e4", ip: ""} in network mk-embed-certs-817144: {Iface:virbr2 ExpiryTime:2024-07-19 16:48:10 +0000 UTC Type:0 Mac:52:54:00:7b:4e:e4 Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:embed-certs-817144 Clientid:01:52:54:00:7b:4e:e4}
	I0719 15:48:35.957405   58376 main.go:141] libmachine: (embed-certs-817144) DBG | domain embed-certs-817144 has defined IP address 192.168.72.37 and MAC address 52:54:00:7b:4e:e4 in network mk-embed-certs-817144
	I0719 15:48:35.957550   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHPort
	I0719 15:48:35.957717   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHKeyPath
	I0719 15:48:35.957841   58376 main.go:141] libmachine: (embed-certs-817144) Calling .GetSSHUsername
	I0719 15:48:35.957953   58376 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/embed-certs-817144/id_rsa Username:docker}
	I0719 15:48:36.072337   58376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:48:36.091547   58376 node_ready.go:35] waiting up to 6m0s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:36.182328   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:48:36.195704   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:48:36.195729   58376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:48:36.221099   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:48:36.224606   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:48:36.224632   58376 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:48:36.247264   58376 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:36.247289   58376 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:48:36.300365   58376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:48:37.231670   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010526005s)
	I0719 15:48:37.231729   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231743   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.231765   58376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049406285s)
	I0719 15:48:37.231807   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.231822   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232034   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232085   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232096   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.232100   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.232105   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.232115   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.232345   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.232366   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233486   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233529   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233541   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.233549   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.233792   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.233815   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.233832   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.240487   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.240502   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.240732   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.240754   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.240755   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288064   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288085   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288370   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288389   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288378   58376 main.go:141] libmachine: (embed-certs-817144) DBG | Closing plugin on server side
	I0719 15:48:37.288400   58376 main.go:141] libmachine: Making call to close driver server
	I0719 15:48:37.288406   58376 main.go:141] libmachine: (embed-certs-817144) Calling .Close
	I0719 15:48:37.288595   58376 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:48:37.288606   58376 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:48:37.288652   58376 addons.go:475] Verifying addon metrics-server=true in "embed-certs-817144"
	I0719 15:48:37.290497   58376 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0719 15:48:33.300279   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:33.800631   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.300013   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:34.800051   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.300468   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.800383   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.300186   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:36.800623   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.300068   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:37.799841   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:35.314792   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.814653   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.291961   58376 addons.go:510] duration metric: took 1.403238435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:48:38.096793   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:35.584345   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:37.585215   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:38.300002   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:38.800639   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.300564   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.800314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.300642   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:40.799787   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.299849   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:41.799868   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.300242   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:42.800481   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:39.818959   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.313745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:44.314213   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:40.596246   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.095976   58376 node_ready.go:53] node "embed-certs-817144" has status "Ready":"False"
	I0719 15:48:43.595640   58376 node_ready.go:49] node "embed-certs-817144" has status "Ready":"True"
	I0719 15:48:43.595659   58376 node_ready.go:38] duration metric: took 7.504089345s for node "embed-certs-817144" to be "Ready" ...
	I0719 15:48:43.595667   58376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:48:43.600832   58376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605878   58376 pod_ready.go:92] pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.605900   58376 pod_ready.go:81] duration metric: took 5.046391ms for pod "coredns-7db6d8ff4d-n945p" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.605912   58376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610759   58376 pod_ready.go:92] pod "etcd-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.610778   58376 pod_ready.go:81] duration metric: took 4.85915ms for pod "etcd-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.610788   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615239   58376 pod_ready.go:92] pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.615257   58376 pod_ready.go:81] duration metric: took 4.46126ms for pod "kube-apiserver-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.615267   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619789   58376 pod_ready.go:92] pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.619804   58376 pod_ready.go:81] duration metric: took 4.530085ms for pod "kube-controller-manager-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.619814   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998585   58376 pod_ready.go:92] pod "kube-proxy-4d4g9" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:43.998612   58376 pod_ready.go:81] duration metric: took 378.78761ms for pod "kube-proxy-4d4g9" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:43.998622   58376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:40.084033   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:42.582983   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:43.300412   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:43.800211   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.300117   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:44.799821   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.300031   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:45.800676   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.300710   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.800307   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.300265   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:47.800008   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:46.812904   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.313178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:46.004415   58376 pod_ready.go:102] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.006304   58376 pod_ready.go:92] pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace has status "Ready":"True"
	I0719 15:48:48.006329   58376 pod_ready.go:81] duration metric: took 4.00769937s for pod "kube-scheduler-embed-certs-817144" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:48.006339   58376 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	I0719 15:48:45.082973   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:47.582224   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:49.582782   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:48.300512   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:48.799929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:48.799998   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:48.839823   58817 cri.go:89] found id: ""
	I0719 15:48:48.839845   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.839852   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:48.839863   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:48.839920   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:48.874635   58817 cri.go:89] found id: ""
	I0719 15:48:48.874661   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.874671   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:48.874679   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:48.874736   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:48.909391   58817 cri.go:89] found id: ""
	I0719 15:48:48.909417   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.909426   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:48.909431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:48.909491   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:48.951232   58817 cri.go:89] found id: ""
	I0719 15:48:48.951258   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.951265   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:48.951271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:48.951323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:48.984391   58817 cri.go:89] found id: ""
	I0719 15:48:48.984413   58817 logs.go:276] 0 containers: []
	W0719 15:48:48.984420   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:48.984426   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:48.984481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:49.018949   58817 cri.go:89] found id: ""
	I0719 15:48:49.018987   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.018996   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:49.019003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:49.019060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:49.055182   58817 cri.go:89] found id: ""
	I0719 15:48:49.055208   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.055217   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:49.055222   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:49.055270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:49.090341   58817 cri.go:89] found id: ""
	I0719 15:48:49.090364   58817 logs.go:276] 0 containers: []
	W0719 15:48:49.090371   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:49.090378   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:49.090390   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:49.104137   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:49.104166   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:49.239447   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:49.239473   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:49.239489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:49.307270   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:49.307307   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:49.345886   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:49.345925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:51.898153   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:51.911943   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:51.912006   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:51.946512   58817 cri.go:89] found id: ""
	I0719 15:48:51.946562   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.946573   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:51.946603   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:51.946664   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:51.982341   58817 cri.go:89] found id: ""
	I0719 15:48:51.982373   58817 logs.go:276] 0 containers: []
	W0719 15:48:51.982381   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:51.982387   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:51.982441   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:52.019705   58817 cri.go:89] found id: ""
	I0719 15:48:52.019732   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.019739   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:52.019744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:52.019799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:52.057221   58817 cri.go:89] found id: ""
	I0719 15:48:52.057250   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.057262   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:52.057271   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:52.057353   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:52.097277   58817 cri.go:89] found id: ""
	I0719 15:48:52.097306   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.097317   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:52.097325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:52.097389   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:52.136354   58817 cri.go:89] found id: ""
	I0719 15:48:52.136398   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.136406   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:52.136412   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:52.136463   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:52.172475   58817 cri.go:89] found id: ""
	I0719 15:48:52.172502   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.172510   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:52.172516   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:52.172565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:52.209164   58817 cri.go:89] found id: ""
	I0719 15:48:52.209192   58817 logs.go:276] 0 containers: []
	W0719 15:48:52.209204   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:52.209214   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:52.209238   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:52.260069   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:52.260101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:52.274794   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:52.274825   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:52.356599   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:52.356628   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:52.356650   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:52.427582   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:52.427630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:51.814049   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:53.815503   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:50.015637   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:52.515491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:51.583726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.083179   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:54.977864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:54.993571   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:54.993645   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:55.034576   58817 cri.go:89] found id: ""
	I0719 15:48:55.034630   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.034641   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:55.034649   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:55.034712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:55.068305   58817 cri.go:89] found id: ""
	I0719 15:48:55.068332   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.068343   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:55.068350   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:55.068408   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:55.106192   58817 cri.go:89] found id: ""
	I0719 15:48:55.106220   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.106227   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:55.106248   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:55.106304   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:55.141287   58817 cri.go:89] found id: ""
	I0719 15:48:55.141318   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.141328   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:55.141334   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:55.141391   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:55.179965   58817 cri.go:89] found id: ""
	I0719 15:48:55.179989   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.179999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:55.180007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:55.180065   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.213558   58817 cri.go:89] found id: ""
	I0719 15:48:55.213588   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.213598   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:55.213607   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:55.213663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:55.247201   58817 cri.go:89] found id: ""
	I0719 15:48:55.247230   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.247243   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:55.247250   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:55.247309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:55.283157   58817 cri.go:89] found id: ""
	I0719 15:48:55.283191   58817 logs.go:276] 0 containers: []
	W0719 15:48:55.283200   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:55.283211   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:55.283228   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:55.361089   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:55.361116   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:55.361134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:55.437784   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:55.437819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:48:55.480735   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:55.480770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:55.534013   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:55.534045   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.048567   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:48:58.063073   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:48:58.063146   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:48:58.100499   58817 cri.go:89] found id: ""
	I0719 15:48:58.100527   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.100538   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:48:58.100545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:48:58.100612   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:48:58.136885   58817 cri.go:89] found id: ""
	I0719 15:48:58.136913   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.136924   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:48:58.136932   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:48:58.137000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:48:58.172034   58817 cri.go:89] found id: ""
	I0719 15:48:58.172064   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.172074   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:48:58.172081   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:48:58.172135   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:48:58.209113   58817 cri.go:89] found id: ""
	I0719 15:48:58.209145   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.209157   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:48:58.209166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:48:58.209256   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:48:58.258903   58817 cri.go:89] found id: ""
	I0719 15:48:58.258938   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.258949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:48:58.258957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:48:58.259016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:48:55.816000   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.817771   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:55.014213   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:57.014730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:56.083381   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.088572   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:58.312314   58817 cri.go:89] found id: ""
	I0719 15:48:58.312342   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.312353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:48:58.312361   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:48:58.312421   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:48:58.349566   58817 cri.go:89] found id: ""
	I0719 15:48:58.349628   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.349638   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:48:58.349645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:48:58.349709   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:48:58.383834   58817 cri.go:89] found id: ""
	I0719 15:48:58.383863   58817 logs.go:276] 0 containers: []
	W0719 15:48:58.383880   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:48:58.383893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:48:58.383907   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:48:58.436984   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:48:58.437020   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:48:58.450460   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:48:58.450489   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:48:58.523392   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:48:58.523408   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:48:58.523420   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:48:58.601407   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:48:58.601439   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:01.141864   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:01.155908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:01.155965   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:01.191492   58817 cri.go:89] found id: ""
	I0719 15:49:01.191524   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.191534   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:01.191542   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:01.191623   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:01.227615   58817 cri.go:89] found id: ""
	I0719 15:49:01.227646   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.227653   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:01.227659   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:01.227716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:01.262624   58817 cri.go:89] found id: ""
	I0719 15:49:01.262647   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.262655   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:01.262661   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:01.262717   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:01.298328   58817 cri.go:89] found id: ""
	I0719 15:49:01.298358   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.298370   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:01.298378   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:01.298439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:01.333181   58817 cri.go:89] found id: ""
	I0719 15:49:01.333208   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.333218   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:01.333225   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:01.333284   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:01.369952   58817 cri.go:89] found id: ""
	I0719 15:49:01.369980   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.369990   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:01.369997   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:01.370076   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:01.405232   58817 cri.go:89] found id: ""
	I0719 15:49:01.405263   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.405273   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:01.405280   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:01.405340   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:01.442960   58817 cri.go:89] found id: ""
	I0719 15:49:01.442989   58817 logs.go:276] 0 containers: []
	W0719 15:49:01.442999   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:01.443009   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:01.443036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:01.493680   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:01.493712   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:01.506699   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:01.506732   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:01.586525   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:01.586547   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:01.586562   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:01.673849   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:01.673897   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:00.313552   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:02.812079   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:48:59.513087   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:01.514094   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.013514   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:00.583159   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:03.082968   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:04.219314   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:04.233386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:04.233481   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:04.274762   58817 cri.go:89] found id: ""
	I0719 15:49:04.274792   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.274802   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:04.274826   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:04.274881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:04.312047   58817 cri.go:89] found id: ""
	I0719 15:49:04.312073   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.312082   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:04.312089   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:04.312164   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:04.351258   58817 cri.go:89] found id: ""
	I0719 15:49:04.351293   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.351307   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:04.351314   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:04.351373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:04.385969   58817 cri.go:89] found id: ""
	I0719 15:49:04.385994   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.386002   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:04.386007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:04.386054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:04.425318   58817 cri.go:89] found id: ""
	I0719 15:49:04.425342   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.425351   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:04.425358   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:04.425416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:04.462578   58817 cri.go:89] found id: ""
	I0719 15:49:04.462607   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.462618   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:04.462626   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:04.462682   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:04.502967   58817 cri.go:89] found id: ""
	I0719 15:49:04.502999   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.503017   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:04.503025   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:04.503084   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:04.540154   58817 cri.go:89] found id: ""
	I0719 15:49:04.540185   58817 logs.go:276] 0 containers: []
	W0719 15:49:04.540195   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:04.540230   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:04.540246   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:04.596126   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:04.596164   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:04.610468   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:04.610509   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:04.683759   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:04.683783   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:04.683803   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:04.764758   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:04.764796   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.303933   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:07.317959   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:07.318031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:07.356462   58817 cri.go:89] found id: ""
	I0719 15:49:07.356490   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.356498   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:07.356511   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:07.356566   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:07.391533   58817 cri.go:89] found id: ""
	I0719 15:49:07.391563   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.391574   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:07.391582   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:07.391662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:07.427877   58817 cri.go:89] found id: ""
	I0719 15:49:07.427914   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.427922   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:07.427927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:07.428005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:07.464667   58817 cri.go:89] found id: ""
	I0719 15:49:07.464691   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.464699   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:07.464704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:07.464768   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:07.499296   58817 cri.go:89] found id: ""
	I0719 15:49:07.499321   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.499329   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:07.499336   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:07.499400   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:07.541683   58817 cri.go:89] found id: ""
	I0719 15:49:07.541715   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.541726   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:07.541733   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:07.541791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:07.577698   58817 cri.go:89] found id: ""
	I0719 15:49:07.577726   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.577737   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:07.577744   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:07.577799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:07.613871   58817 cri.go:89] found id: ""
	I0719 15:49:07.613904   58817 logs.go:276] 0 containers: []
	W0719 15:49:07.613914   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:07.613926   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:07.613942   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:07.690982   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:07.691006   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:07.691021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:07.778212   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:07.778277   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:07.820821   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:07.820866   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:07.873053   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:07.873097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:05.312525   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.812891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:06.013654   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:08.015552   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:05.083931   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:07.583371   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.387941   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:10.401132   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:10.401205   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:10.437084   58817 cri.go:89] found id: ""
	I0719 15:49:10.437112   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.437120   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:10.437178   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:10.437243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:10.472675   58817 cri.go:89] found id: ""
	I0719 15:49:10.472703   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.472712   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:10.472720   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:10.472780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:10.506448   58817 cri.go:89] found id: ""
	I0719 15:49:10.506480   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.506490   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:10.506497   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:10.506544   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:10.542574   58817 cri.go:89] found id: ""
	I0719 15:49:10.542604   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.542612   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:10.542618   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:10.542701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:10.575963   58817 cri.go:89] found id: ""
	I0719 15:49:10.575990   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.575999   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:10.576005   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:10.576063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:10.614498   58817 cri.go:89] found id: ""
	I0719 15:49:10.614529   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.614539   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:10.614548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:10.614613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:10.652802   58817 cri.go:89] found id: ""
	I0719 15:49:10.652825   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.652833   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:10.652838   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:10.652886   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:10.688985   58817 cri.go:89] found id: ""
	I0719 15:49:10.689019   58817 logs.go:276] 0 containers: []
	W0719 15:49:10.689029   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:10.689041   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:10.689058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:10.741552   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:10.741586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:10.756514   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:10.756542   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:10.837916   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:10.837940   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:10.837956   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:10.919878   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:10.919924   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:09.824389   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.312960   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.512671   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.513359   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:10.082891   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:12.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:14.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:13.462603   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:13.476387   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:13.476449   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:13.514170   58817 cri.go:89] found id: ""
	I0719 15:49:13.514195   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.514205   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:13.514211   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:13.514281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:13.548712   58817 cri.go:89] found id: ""
	I0719 15:49:13.548739   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.548747   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:13.548753   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:13.548808   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:13.582623   58817 cri.go:89] found id: ""
	I0719 15:49:13.582648   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.582657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:13.582664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:13.582721   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:13.619343   58817 cri.go:89] found id: ""
	I0719 15:49:13.619369   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.619379   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:13.619385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:13.619444   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:13.655755   58817 cri.go:89] found id: ""
	I0719 15:49:13.655785   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.655793   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:13.655798   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:13.655856   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:13.691021   58817 cri.go:89] found id: ""
	I0719 15:49:13.691104   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.691124   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:13.691133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:13.691196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:13.728354   58817 cri.go:89] found id: ""
	I0719 15:49:13.728380   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.728390   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:13.728397   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:13.728459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:13.764498   58817 cri.go:89] found id: ""
	I0719 15:49:13.764526   58817 logs.go:276] 0 containers: []
	W0719 15:49:13.764535   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:13.764544   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:13.764557   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:13.803474   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:13.803500   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:13.854709   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:13.854742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:13.870499   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:13.870526   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:13.943250   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:13.943270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:13.943282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.525806   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:16.539483   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:16.539558   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:16.574003   58817 cri.go:89] found id: ""
	I0719 15:49:16.574032   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.574043   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:16.574050   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:16.574112   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:16.610637   58817 cri.go:89] found id: ""
	I0719 15:49:16.610668   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.610676   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:16.610682   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:16.610731   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:16.648926   58817 cri.go:89] found id: ""
	I0719 15:49:16.648957   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.648968   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:16.648975   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:16.649027   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:16.682819   58817 cri.go:89] found id: ""
	I0719 15:49:16.682848   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.682859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:16.682866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:16.682919   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:16.719879   58817 cri.go:89] found id: ""
	I0719 15:49:16.719912   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.719922   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:16.719930   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:16.719988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:16.755776   58817 cri.go:89] found id: ""
	I0719 15:49:16.755809   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.755820   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:16.755829   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:16.755903   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:16.792158   58817 cri.go:89] found id: ""
	I0719 15:49:16.792186   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.792193   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:16.792199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:16.792260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:16.829694   58817 cri.go:89] found id: ""
	I0719 15:49:16.829722   58817 logs.go:276] 0 containers: []
	W0719 15:49:16.829733   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:16.829741   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:16.829761   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:16.843522   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:16.843552   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:16.914025   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:16.914047   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:16.914063   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:16.996672   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:16.996709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:17.042138   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:17.042170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:14.813090   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.311701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:15.014386   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:17.513993   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:16.584566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.082569   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:19.597598   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:19.611433   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:19.611487   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:19.646047   58817 cri.go:89] found id: ""
	I0719 15:49:19.646073   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.646080   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:19.646086   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:19.646145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:19.683589   58817 cri.go:89] found id: ""
	I0719 15:49:19.683620   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.683632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:19.683643   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:19.683701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:19.722734   58817 cri.go:89] found id: ""
	I0719 15:49:19.722761   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.722771   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:19.722778   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:19.722836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:19.759418   58817 cri.go:89] found id: ""
	I0719 15:49:19.759445   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.759454   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:19.759459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:19.759522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:19.795168   58817 cri.go:89] found id: ""
	I0719 15:49:19.795193   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.795201   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:19.795206   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:19.795259   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:19.830930   58817 cri.go:89] found id: ""
	I0719 15:49:19.830959   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.830969   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:19.830976   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:19.831035   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:19.866165   58817 cri.go:89] found id: ""
	I0719 15:49:19.866187   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.866195   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:19.866201   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:19.866252   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:19.899415   58817 cri.go:89] found id: ""
	I0719 15:49:19.899446   58817 logs.go:276] 0 containers: []
	W0719 15:49:19.899456   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:19.899467   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:19.899482   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:19.950944   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:19.950975   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:19.964523   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:19.964545   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:20.032244   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:20.032270   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:20.032290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:20.110285   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:20.110317   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:22.650693   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:22.666545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:22.666618   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:22.709820   58817 cri.go:89] found id: ""
	I0719 15:49:22.709846   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.709854   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:22.709860   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:22.709905   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:22.745373   58817 cri.go:89] found id: ""
	I0719 15:49:22.745398   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.745406   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:22.745411   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:22.745461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:22.785795   58817 cri.go:89] found id: ""
	I0719 15:49:22.785828   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.785838   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:22.785846   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:22.785904   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:22.826542   58817 cri.go:89] found id: ""
	I0719 15:49:22.826569   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.826579   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:22.826587   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:22.826648   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:22.866761   58817 cri.go:89] found id: ""
	I0719 15:49:22.866789   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.866800   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:22.866807   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:22.866868   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:22.913969   58817 cri.go:89] found id: ""
	I0719 15:49:22.913999   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.914009   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:22.914017   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:22.914082   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:22.950230   58817 cri.go:89] found id: ""
	I0719 15:49:22.950287   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.950298   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:22.950305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:22.950366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:22.986400   58817 cri.go:89] found id: ""
	I0719 15:49:22.986424   58817 logs.go:276] 0 containers: []
	W0719 15:49:22.986434   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:22.986446   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:22.986460   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:23.072119   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:23.072153   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:23.111021   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:23.111053   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:23.161490   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:23.161518   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:23.174729   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:23.174766   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:23.251205   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:19.814129   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.814762   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:23.817102   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:20.012767   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:22.512467   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:21.587074   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:24.082829   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.752355   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:25.765501   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:25.765559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:25.801073   58817 cri.go:89] found id: ""
	I0719 15:49:25.801107   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.801117   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:25.801126   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:25.801187   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:25.839126   58817 cri.go:89] found id: ""
	I0719 15:49:25.839151   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.839158   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:25.839163   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:25.839210   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:25.873081   58817 cri.go:89] found id: ""
	I0719 15:49:25.873110   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.873120   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:25.873134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:25.873183   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:25.908874   58817 cri.go:89] found id: ""
	I0719 15:49:25.908910   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.908921   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:25.908929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:25.908988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:25.945406   58817 cri.go:89] found id: ""
	I0719 15:49:25.945431   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.945439   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:25.945445   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:25.945515   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:25.978276   58817 cri.go:89] found id: ""
	I0719 15:49:25.978298   58817 logs.go:276] 0 containers: []
	W0719 15:49:25.978306   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:25.978312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:25.978359   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:26.013749   58817 cri.go:89] found id: ""
	I0719 15:49:26.013776   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.013786   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:26.013792   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:26.013840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:26.046225   58817 cri.go:89] found id: ""
	I0719 15:49:26.046269   58817 logs.go:276] 0 containers: []
	W0719 15:49:26.046280   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:26.046290   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:26.046305   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:26.086785   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:26.086808   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:26.138746   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:26.138777   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:26.152114   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:26.152139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:26.224234   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:26.224262   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:26.224279   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:26.312496   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.312687   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:25.015437   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:27.514515   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:26.084854   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.584103   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:28.802738   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:28.817246   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:28.817321   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:28.852398   58817 cri.go:89] found id: ""
	I0719 15:49:28.852429   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.852437   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:28.852449   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:28.852500   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:28.890337   58817 cri.go:89] found id: ""
	I0719 15:49:28.890368   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.890378   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:28.890386   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:28.890446   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:28.929083   58817 cri.go:89] found id: ""
	I0719 15:49:28.929106   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.929113   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:28.929119   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:28.929173   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:28.967708   58817 cri.go:89] found id: ""
	I0719 15:49:28.967735   58817 logs.go:276] 0 containers: []
	W0719 15:49:28.967745   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:28.967752   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:28.967812   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:29.001087   58817 cri.go:89] found id: ""
	I0719 15:49:29.001115   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.001131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:29.001139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:29.001198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:29.039227   58817 cri.go:89] found id: ""
	I0719 15:49:29.039258   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.039268   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:29.039275   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:29.039333   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:29.079927   58817 cri.go:89] found id: ""
	I0719 15:49:29.079955   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.079965   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:29.079973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:29.080037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:29.115035   58817 cri.go:89] found id: ""
	I0719 15:49:29.115060   58817 logs.go:276] 0 containers: []
	W0719 15:49:29.115070   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:29.115080   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:29.115094   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:29.168452   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:29.168487   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:29.182483   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:29.182517   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:29.256139   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:29.256177   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:29.256193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:29.342435   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:29.342472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:31.888988   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:31.902450   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:31.902524   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:31.940007   58817 cri.go:89] found id: ""
	I0719 15:49:31.940035   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.940045   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:31.940053   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:31.940111   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:31.978055   58817 cri.go:89] found id: ""
	I0719 15:49:31.978089   58817 logs.go:276] 0 containers: []
	W0719 15:49:31.978101   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:31.978109   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:31.978168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:32.011666   58817 cri.go:89] found id: ""
	I0719 15:49:32.011697   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.011707   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:32.011714   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:32.011779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:32.046326   58817 cri.go:89] found id: ""
	I0719 15:49:32.046363   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.046373   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:32.046383   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:32.046447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:32.082387   58817 cri.go:89] found id: ""
	I0719 15:49:32.082416   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.082425   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:32.082432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:32.082488   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:32.118653   58817 cri.go:89] found id: ""
	I0719 15:49:32.118693   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.118703   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:32.118710   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:32.118769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:32.154053   58817 cri.go:89] found id: ""
	I0719 15:49:32.154075   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.154082   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:32.154088   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:32.154134   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:32.189242   58817 cri.go:89] found id: ""
	I0719 15:49:32.189272   58817 logs.go:276] 0 containers: []
	W0719 15:49:32.189283   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:32.189293   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:32.189309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:32.263285   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:32.263313   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:32.263329   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:32.341266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:32.341302   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:32.380827   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:32.380852   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:32.432888   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:32.432922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:30.313153   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:32.812075   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:29.514963   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.515163   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.014174   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:31.083793   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:33.083838   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:34.948894   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:34.963787   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:34.963840   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:35.000752   58817 cri.go:89] found id: ""
	I0719 15:49:35.000782   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.000788   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:35.000794   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:35.000849   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:35.038325   58817 cri.go:89] found id: ""
	I0719 15:49:35.038355   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.038367   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:35.038375   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:35.038433   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:35.074945   58817 cri.go:89] found id: ""
	I0719 15:49:35.074972   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.074981   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:35.074987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:35.075031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:35.111644   58817 cri.go:89] found id: ""
	I0719 15:49:35.111671   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.111681   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:35.111688   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:35.111746   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:35.146101   58817 cri.go:89] found id: ""
	I0719 15:49:35.146132   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.146141   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:35.146148   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:35.146198   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:35.185147   58817 cri.go:89] found id: ""
	I0719 15:49:35.185173   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.185181   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:35.185188   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:35.185233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:35.227899   58817 cri.go:89] found id: ""
	I0719 15:49:35.227931   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.227941   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:35.227949   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:35.228010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:35.265417   58817 cri.go:89] found id: ""
	I0719 15:49:35.265441   58817 logs.go:276] 0 containers: []
	W0719 15:49:35.265451   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:35.265462   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:35.265477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:35.316534   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:35.316567   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:35.330131   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:35.330154   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:35.401068   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:35.401091   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:35.401107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:35.477126   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:35.477170   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.019443   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:38.035957   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:38.036032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:38.078249   58817 cri.go:89] found id: ""
	I0719 15:49:38.078278   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.078288   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:38.078296   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:38.078367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:38.125072   58817 cri.go:89] found id: ""
	I0719 15:49:38.125098   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.125106   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:38.125112   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:38.125171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:38.165134   58817 cri.go:89] found id: ""
	I0719 15:49:38.165160   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.165170   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:38.165178   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:38.165233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:38.204968   58817 cri.go:89] found id: ""
	I0719 15:49:38.204995   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.205004   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:38.205013   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:38.205074   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:38.237132   58817 cri.go:89] found id: ""
	I0719 15:49:38.237157   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.237167   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:38.237174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:38.237231   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:34.812542   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.311929   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.312244   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:36.513892   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:39.013261   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:35.084098   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:37.587696   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:38.274661   58817 cri.go:89] found id: ""
	I0719 15:49:38.274691   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.274699   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:38.274704   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:38.274747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:38.311326   58817 cri.go:89] found id: ""
	I0719 15:49:38.311354   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.311365   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:38.311372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:38.311428   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:38.348071   58817 cri.go:89] found id: ""
	I0719 15:49:38.348099   58817 logs.go:276] 0 containers: []
	W0719 15:49:38.348110   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:38.348120   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:38.348134   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:38.432986   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:38.433021   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:38.472439   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:38.472486   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:38.526672   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:38.526706   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:38.540777   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:38.540800   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:38.617657   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.118442   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:41.131935   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:41.132016   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:41.164303   58817 cri.go:89] found id: ""
	I0719 15:49:41.164330   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.164342   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:41.164348   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:41.164396   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:41.197878   58817 cri.go:89] found id: ""
	I0719 15:49:41.197901   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.197909   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:41.197927   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:41.197979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:41.231682   58817 cri.go:89] found id: ""
	I0719 15:49:41.231712   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.231722   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:41.231730   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:41.231793   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:41.268328   58817 cri.go:89] found id: ""
	I0719 15:49:41.268354   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.268364   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:41.268372   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:41.268422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:41.306322   58817 cri.go:89] found id: ""
	I0719 15:49:41.306350   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.306358   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:41.306365   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:41.306416   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:41.342332   58817 cri.go:89] found id: ""
	I0719 15:49:41.342361   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.342372   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:41.342379   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:41.342440   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:41.378326   58817 cri.go:89] found id: ""
	I0719 15:49:41.378352   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.378362   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:41.378371   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:41.378422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:41.410776   58817 cri.go:89] found id: ""
	I0719 15:49:41.410804   58817 logs.go:276] 0 containers: []
	W0719 15:49:41.410814   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:41.410824   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:41.410843   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:41.424133   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:41.424157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:41.498684   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:41.498764   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:41.498784   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:41.583440   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:41.583472   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:41.624962   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:41.624998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:41.313207   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.815916   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:41.013495   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:43.513445   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:40.082726   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:42.583599   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.584503   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:44.177094   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:44.191411   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:44.191466   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:44.226809   58817 cri.go:89] found id: ""
	I0719 15:49:44.226837   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.226847   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:44.226855   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:44.226951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:44.262361   58817 cri.go:89] found id: ""
	I0719 15:49:44.262391   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.262402   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:44.262408   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:44.262452   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:44.295729   58817 cri.go:89] found id: ""
	I0719 15:49:44.295758   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.295768   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:44.295775   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:44.295836   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:44.330968   58817 cri.go:89] found id: ""
	I0719 15:49:44.330996   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.331005   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:44.331012   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:44.331068   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:44.367914   58817 cri.go:89] found id: ""
	I0719 15:49:44.367937   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.367945   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:44.367951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:44.368005   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:44.401127   58817 cri.go:89] found id: ""
	I0719 15:49:44.401151   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.401159   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:44.401164   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:44.401207   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:44.435696   58817 cri.go:89] found id: ""
	I0719 15:49:44.435724   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.435734   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:44.435741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:44.435803   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:44.481553   58817 cri.go:89] found id: ""
	I0719 15:49:44.481582   58817 logs.go:276] 0 containers: []
	W0719 15:49:44.481592   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:44.481603   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:44.481618   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:44.573147   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:44.573181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:44.618556   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:44.618580   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:44.673328   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:44.673364   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:44.687806   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:44.687835   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:44.763624   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.264039   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:47.277902   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:47.277984   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:47.318672   58817 cri.go:89] found id: ""
	I0719 15:49:47.318702   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.318713   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:47.318720   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:47.318780   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:47.360410   58817 cri.go:89] found id: ""
	I0719 15:49:47.360434   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.360444   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:47.360451   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:47.360507   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:47.397890   58817 cri.go:89] found id: ""
	I0719 15:49:47.397918   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.397925   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:47.397931   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:47.397981   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:47.438930   58817 cri.go:89] found id: ""
	I0719 15:49:47.438960   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.438971   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:47.438981   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:47.439040   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:47.479242   58817 cri.go:89] found id: ""
	I0719 15:49:47.479267   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.479277   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:47.479285   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:47.479341   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:47.518583   58817 cri.go:89] found id: ""
	I0719 15:49:47.518610   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.518620   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:47.518628   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:47.518686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:47.553714   58817 cri.go:89] found id: ""
	I0719 15:49:47.553736   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.553744   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:47.553750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:47.553798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:47.591856   58817 cri.go:89] found id: ""
	I0719 15:49:47.591879   58817 logs.go:276] 0 containers: []
	W0719 15:49:47.591886   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:47.591893   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:47.591904   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:47.644911   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:47.644951   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:47.659718   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:47.659742   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:47.735693   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:47.735713   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:47.735727   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:47.816090   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:47.816121   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:46.313534   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.811536   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:46.012299   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:48.515396   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:47.082848   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:49.083291   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.358703   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:50.373832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:50.373908   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:50.408598   58817 cri.go:89] found id: ""
	I0719 15:49:50.408640   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.408649   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:50.408655   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:50.408701   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:50.446067   58817 cri.go:89] found id: ""
	I0719 15:49:50.446096   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.446104   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:50.446110   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:50.446152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:50.480886   58817 cri.go:89] found id: ""
	I0719 15:49:50.480918   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.480927   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:50.480933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:50.480997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:50.514680   58817 cri.go:89] found id: ""
	I0719 15:49:50.514707   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.514717   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:50.514724   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:50.514779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:50.550829   58817 cri.go:89] found id: ""
	I0719 15:49:50.550854   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.550861   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:50.550866   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:50.550910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:50.585407   58817 cri.go:89] found id: ""
	I0719 15:49:50.585434   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.585444   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:50.585452   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:50.585511   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:50.623083   58817 cri.go:89] found id: ""
	I0719 15:49:50.623110   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.623121   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:50.623129   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:50.623181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:50.667231   58817 cri.go:89] found id: ""
	I0719 15:49:50.667258   58817 logs.go:276] 0 containers: []
	W0719 15:49:50.667266   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:50.667274   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:50.667290   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:50.718998   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:50.719032   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:50.733560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:50.733595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:50.800276   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:50.800298   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:50.800310   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:50.881314   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:50.881354   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:50.813781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:52.817124   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:50.516602   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.012716   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:51.083390   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.583030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:53.427179   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:53.444191   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:53.444250   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:53.481092   58817 cri.go:89] found id: ""
	I0719 15:49:53.481125   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.481135   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:53.481143   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:53.481202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:53.517308   58817 cri.go:89] found id: ""
	I0719 15:49:53.517332   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.517340   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:53.517345   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:53.517390   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:53.552638   58817 cri.go:89] found id: ""
	I0719 15:49:53.552667   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.552677   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:53.552684   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:53.552750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:53.587003   58817 cri.go:89] found id: ""
	I0719 15:49:53.587027   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.587034   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:53.587044   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:53.587093   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:53.620361   58817 cri.go:89] found id: ""
	I0719 15:49:53.620389   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.620399   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:53.620406   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:53.620464   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:53.659231   58817 cri.go:89] found id: ""
	I0719 15:49:53.659255   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.659262   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:53.659267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:53.659323   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:53.695312   58817 cri.go:89] found id: ""
	I0719 15:49:53.695345   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.695355   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:53.695362   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:53.695430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:53.735670   58817 cri.go:89] found id: ""
	I0719 15:49:53.735698   58817 logs.go:276] 0 containers: []
	W0719 15:49:53.735708   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:53.735718   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:53.735733   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:53.750912   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:53.750940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:53.818038   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:53.818064   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:53.818077   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:53.902200   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:53.902259   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:53.945805   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:53.945847   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.498178   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:56.511454   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:56.511541   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:56.548043   58817 cri.go:89] found id: ""
	I0719 15:49:56.548070   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.548081   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:56.548089   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:56.548149   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:56.583597   58817 cri.go:89] found id: ""
	I0719 15:49:56.583620   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.583632   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:56.583651   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:56.583710   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:56.622673   58817 cri.go:89] found id: ""
	I0719 15:49:56.622704   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.622714   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:56.622722   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:56.622785   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:56.659663   58817 cri.go:89] found id: ""
	I0719 15:49:56.659691   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.659702   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:56.659711   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:56.659764   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:56.694072   58817 cri.go:89] found id: ""
	I0719 15:49:56.694097   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.694105   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:56.694111   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:56.694158   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:56.730104   58817 cri.go:89] found id: ""
	I0719 15:49:56.730131   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.730139   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:56.730144   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:56.730202   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:56.762952   58817 cri.go:89] found id: ""
	I0719 15:49:56.762977   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.762988   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:56.762995   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:56.763059   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:56.800091   58817 cri.go:89] found id: ""
	I0719 15:49:56.800114   58817 logs.go:276] 0 containers: []
	W0719 15:49:56.800122   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:56.800130   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:49:56.800141   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:56.843328   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:49:56.843363   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:49:56.894700   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:49:56.894734   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:49:56.908975   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:56.908999   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:56.980062   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:56.980087   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:56.980099   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:49:55.312032   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.813778   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:55.013719   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:57.014070   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:56.083506   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:58.582593   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.557467   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:49:59.571083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:49:59.571151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:49:59.606593   58817 cri.go:89] found id: ""
	I0719 15:49:59.606669   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.606680   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:49:59.606688   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:49:59.606743   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:49:59.643086   58817 cri.go:89] found id: ""
	I0719 15:49:59.643115   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.643126   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:49:59.643134   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:49:59.643188   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:49:59.678976   58817 cri.go:89] found id: ""
	I0719 15:49:59.678995   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.679002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:49:59.679008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:49:59.679060   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:49:59.713450   58817 cri.go:89] found id: ""
	I0719 15:49:59.713483   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.713490   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:49:59.713495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:49:59.713540   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:49:59.749902   58817 cri.go:89] found id: ""
	I0719 15:49:59.749924   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.749932   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:49:59.749938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:49:59.749985   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:49:59.793298   58817 cri.go:89] found id: ""
	I0719 15:49:59.793327   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.793335   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:49:59.793341   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:49:59.793399   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:49:59.835014   58817 cri.go:89] found id: ""
	I0719 15:49:59.835040   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.835047   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:49:59.835053   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:49:59.835101   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:49:59.874798   58817 cri.go:89] found id: ""
	I0719 15:49:59.874824   58817 logs.go:276] 0 containers: []
	W0719 15:49:59.874831   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:49:59.874840   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:49:59.874851   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:49:59.948173   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:49:59.948195   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:49:59.948210   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:00.026793   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:00.026828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:00.066659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:00.066687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:00.119005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:00.119036   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:02.634375   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:02.648845   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:02.648918   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:02.683204   58817 cri.go:89] found id: ""
	I0719 15:50:02.683231   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.683240   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:02.683246   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:02.683308   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:02.718869   58817 cri.go:89] found id: ""
	I0719 15:50:02.718901   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.718914   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:02.718921   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:02.718979   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:02.758847   58817 cri.go:89] found id: ""
	I0719 15:50:02.758874   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.758885   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:02.758892   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:02.758951   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:02.800199   58817 cri.go:89] found id: ""
	I0719 15:50:02.800230   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.800238   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:02.800243   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:02.800289   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:02.840302   58817 cri.go:89] found id: ""
	I0719 15:50:02.840334   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.840345   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:02.840353   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:02.840415   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:02.874769   58817 cri.go:89] found id: ""
	I0719 15:50:02.874794   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.874801   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:02.874818   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:02.874885   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:02.914492   58817 cri.go:89] found id: ""
	I0719 15:50:02.914522   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.914532   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:02.914540   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:02.914601   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:02.951548   58817 cri.go:89] found id: ""
	I0719 15:50:02.951577   58817 logs.go:276] 0 containers: []
	W0719 15:50:02.951588   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:02.951599   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:02.951613   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:03.003081   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:03.003118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:03.017738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:03.017767   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:03.090925   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:03.090947   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:03.090958   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:03.169066   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:03.169101   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:49:59.815894   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.312541   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:49:59.513158   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:02.013500   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:00.583268   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:03.082967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.712269   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:05.724799   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:05.724872   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:05.759074   58817 cri.go:89] found id: ""
	I0719 15:50:05.759101   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.759108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:05.759113   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:05.759169   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:05.798316   58817 cri.go:89] found id: ""
	I0719 15:50:05.798413   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.798432   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:05.798442   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:05.798504   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:05.834861   58817 cri.go:89] found id: ""
	I0719 15:50:05.834890   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.834898   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:05.834903   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:05.834962   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:05.868547   58817 cri.go:89] found id: ""
	I0719 15:50:05.868574   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.868582   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:05.868588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:05.868691   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:05.903684   58817 cri.go:89] found id: ""
	I0719 15:50:05.903718   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.903730   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:05.903738   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:05.903798   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:05.938521   58817 cri.go:89] found id: ""
	I0719 15:50:05.938552   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.938567   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:05.938576   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:05.938628   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:05.973683   58817 cri.go:89] found id: ""
	I0719 15:50:05.973710   58817 logs.go:276] 0 containers: []
	W0719 15:50:05.973717   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:05.973723   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:05.973825   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:06.010528   58817 cri.go:89] found id: ""
	I0719 15:50:06.010559   58817 logs.go:276] 0 containers: []
	W0719 15:50:06.010569   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:06.010580   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:06.010593   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:06.053090   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:06.053145   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:06.106906   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:06.106939   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:06.121914   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:06.121944   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:06.197465   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:06.197492   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:06.197507   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:04.814326   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.314104   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:04.513144   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.013900   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.014269   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:05.582967   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:07.583076   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:09.583550   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:08.782285   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:08.795115   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:08.795180   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:08.834264   58817 cri.go:89] found id: ""
	I0719 15:50:08.834295   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.834306   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:08.834314   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:08.834371   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:08.873227   58817 cri.go:89] found id: ""
	I0719 15:50:08.873258   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.873268   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:08.873276   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:08.873330   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:08.907901   58817 cri.go:89] found id: ""
	I0719 15:50:08.907929   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.907940   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:08.907948   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:08.908011   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:08.941350   58817 cri.go:89] found id: ""
	I0719 15:50:08.941381   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.941391   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:08.941400   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:08.941453   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:08.978469   58817 cri.go:89] found id: ""
	I0719 15:50:08.978495   58817 logs.go:276] 0 containers: []
	W0719 15:50:08.978502   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:08.978508   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:08.978563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:09.017469   58817 cri.go:89] found id: ""
	I0719 15:50:09.017492   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.017501   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:09.017509   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:09.017563   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:09.056675   58817 cri.go:89] found id: ""
	I0719 15:50:09.056703   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.056711   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:09.056718   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:09.056769   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:09.096655   58817 cri.go:89] found id: ""
	I0719 15:50:09.096680   58817 logs.go:276] 0 containers: []
	W0719 15:50:09.096688   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:09.096696   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:09.096710   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:09.135765   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:09.135791   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:09.189008   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:09.189044   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:09.203988   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:09.204014   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:09.278418   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:09.278440   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:09.278453   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:11.857017   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:11.870592   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:11.870650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:11.907057   58817 cri.go:89] found id: ""
	I0719 15:50:11.907088   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.907097   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:11.907103   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:11.907152   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:11.944438   58817 cri.go:89] found id: ""
	I0719 15:50:11.944466   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.944476   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:11.944484   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:11.944547   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:11.986506   58817 cri.go:89] found id: ""
	I0719 15:50:11.986534   58817 logs.go:276] 0 containers: []
	W0719 15:50:11.986545   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:11.986553   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:11.986610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:12.026171   58817 cri.go:89] found id: ""
	I0719 15:50:12.026221   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.026250   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:12.026260   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:12.026329   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:12.060990   58817 cri.go:89] found id: ""
	I0719 15:50:12.061018   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.061028   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:12.061036   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:12.061097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:12.098545   58817 cri.go:89] found id: ""
	I0719 15:50:12.098573   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.098584   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:12.098591   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:12.098650   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:12.134949   58817 cri.go:89] found id: ""
	I0719 15:50:12.134978   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.134989   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:12.134996   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:12.135061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:12.171142   58817 cri.go:89] found id: ""
	I0719 15:50:12.171165   58817 logs.go:276] 0 containers: []
	W0719 15:50:12.171173   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:12.171181   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:12.171193   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:12.211496   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:12.211536   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:12.266024   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:12.266060   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:12.280951   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:12.280985   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:12.352245   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:12.352269   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:12.352280   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:09.813831   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.815120   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.815551   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.512872   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:13.514351   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:11.584717   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.082745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:14.929733   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:14.943732   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:14.943815   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:14.980506   58817 cri.go:89] found id: ""
	I0719 15:50:14.980529   58817 logs.go:276] 0 containers: []
	W0719 15:50:14.980539   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:14.980545   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:14.980590   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:15.015825   58817 cri.go:89] found id: ""
	I0719 15:50:15.015853   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.015863   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:15.015870   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:15.015937   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:15.054862   58817 cri.go:89] found id: ""
	I0719 15:50:15.054894   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.054905   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:15.054913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:15.054973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:15.092542   58817 cri.go:89] found id: ""
	I0719 15:50:15.092573   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.092590   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:15.092598   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:15.092663   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:15.127815   58817 cri.go:89] found id: ""
	I0719 15:50:15.127843   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.127853   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:15.127865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:15.127931   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:15.166423   58817 cri.go:89] found id: ""
	I0719 15:50:15.166446   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.166453   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:15.166459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:15.166517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.199240   58817 cri.go:89] found id: ""
	I0719 15:50:15.199268   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.199277   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:15.199283   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:15.199336   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:15.231927   58817 cri.go:89] found id: ""
	I0719 15:50:15.231957   58817 logs.go:276] 0 containers: []
	W0719 15:50:15.231966   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:15.231978   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:15.231994   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:15.284551   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:15.284586   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:15.299152   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:15.299181   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:15.374085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:15.374107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:15.374123   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:15.458103   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:15.458144   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:18.003862   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:18.019166   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:18.019215   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:18.053430   58817 cri.go:89] found id: ""
	I0719 15:50:18.053470   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.053482   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:18.053492   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:18.053565   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:18.091897   58817 cri.go:89] found id: ""
	I0719 15:50:18.091922   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.091931   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:18.091936   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:18.091997   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:18.127239   58817 cri.go:89] found id: ""
	I0719 15:50:18.127266   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.127277   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:18.127287   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:18.127346   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:18.163927   58817 cri.go:89] found id: ""
	I0719 15:50:18.163953   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.163965   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:18.163973   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:18.164032   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:18.199985   58817 cri.go:89] found id: ""
	I0719 15:50:18.200015   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.200027   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:18.200034   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:18.200096   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:18.234576   58817 cri.go:89] found id: ""
	I0719 15:50:18.234603   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.234614   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:18.234625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:18.234686   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:15.815701   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:17.816052   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.012834   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.014504   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:16.582156   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.583011   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:18.270493   58817 cri.go:89] found id: ""
	I0719 15:50:18.270516   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.270526   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:18.270532   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:18.270588   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:18.306779   58817 cri.go:89] found id: ""
	I0719 15:50:18.306813   58817 logs.go:276] 0 containers: []
	W0719 15:50:18.306821   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:18.306832   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:18.306850   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:18.375782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:18.375814   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:18.390595   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:18.390630   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:18.459204   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:18.459227   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:18.459243   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:18.540667   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:18.540724   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.084736   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:21.099416   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:21.099495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:21.133193   58817 cri.go:89] found id: ""
	I0719 15:50:21.133216   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.133224   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:21.133231   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:21.133309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:21.174649   58817 cri.go:89] found id: ""
	I0719 15:50:21.174679   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.174689   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:21.174697   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:21.174757   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:21.208279   58817 cri.go:89] found id: ""
	I0719 15:50:21.208309   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.208319   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:21.208325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:21.208386   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:21.242199   58817 cri.go:89] found id: ""
	I0719 15:50:21.242222   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.242229   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:21.242247   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:21.242301   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:21.278018   58817 cri.go:89] found id: ""
	I0719 15:50:21.278050   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.278059   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:21.278069   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:21.278125   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:21.314397   58817 cri.go:89] found id: ""
	I0719 15:50:21.314419   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.314427   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:21.314435   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:21.314490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:21.349041   58817 cri.go:89] found id: ""
	I0719 15:50:21.349067   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.349075   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:21.349080   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:21.349129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:21.387325   58817 cri.go:89] found id: ""
	I0719 15:50:21.387353   58817 logs.go:276] 0 containers: []
	W0719 15:50:21.387361   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:21.387369   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:21.387384   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:21.401150   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:21.401177   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:21.465784   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:21.465810   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:21.465821   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:21.545965   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:21.545998   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:21.584054   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:21.584081   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:20.312912   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:22.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:20.513572   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.014103   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:21.082689   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:23.583483   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:24.139199   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:24.152485   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:24.152552   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:24.186387   58817 cri.go:89] found id: ""
	I0719 15:50:24.186417   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.186427   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:24.186435   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:24.186494   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:24.226061   58817 cri.go:89] found id: ""
	I0719 15:50:24.226093   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.226103   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:24.226111   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:24.226168   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:24.265542   58817 cri.go:89] found id: ""
	I0719 15:50:24.265566   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.265574   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:24.265579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:24.265630   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:24.300277   58817 cri.go:89] found id: ""
	I0719 15:50:24.300308   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.300318   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:24.300325   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:24.300378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:24.340163   58817 cri.go:89] found id: ""
	I0719 15:50:24.340192   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.340203   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:24.340211   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:24.340270   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:24.375841   58817 cri.go:89] found id: ""
	I0719 15:50:24.375863   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.375873   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:24.375881   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:24.375941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:24.413528   58817 cri.go:89] found id: ""
	I0719 15:50:24.413558   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.413569   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:24.413577   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:24.413641   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:24.451101   58817 cri.go:89] found id: ""
	I0719 15:50:24.451129   58817 logs.go:276] 0 containers: []
	W0719 15:50:24.451139   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:24.451148   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:24.451163   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:24.491150   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:24.491178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:24.544403   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:24.544436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:24.560376   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:24.560407   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:24.633061   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:24.633081   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:24.633097   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.214261   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:27.227642   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:27.227724   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:27.263805   58817 cri.go:89] found id: ""
	I0719 15:50:27.263838   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.263851   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:27.263859   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:27.263941   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:27.299817   58817 cri.go:89] found id: ""
	I0719 15:50:27.299860   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.299872   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:27.299879   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:27.299947   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:27.339924   58817 cri.go:89] found id: ""
	I0719 15:50:27.339953   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.339963   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:27.339971   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:27.340036   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:27.375850   58817 cri.go:89] found id: ""
	I0719 15:50:27.375877   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.375885   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:27.375891   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:27.375940   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:27.410395   58817 cri.go:89] found id: ""
	I0719 15:50:27.410420   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.410429   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:27.410437   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:27.410498   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:27.444124   58817 cri.go:89] found id: ""
	I0719 15:50:27.444154   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.444162   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:27.444167   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:27.444230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:27.478162   58817 cri.go:89] found id: ""
	I0719 15:50:27.478191   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.478202   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:27.478210   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:27.478285   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:27.514901   58817 cri.go:89] found id: ""
	I0719 15:50:27.514939   58817 logs.go:276] 0 containers: []
	W0719 15:50:27.514949   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:27.514959   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:27.514973   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:27.591783   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:27.591815   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:27.629389   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:27.629431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:27.684318   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:27.684351   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:27.698415   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:27.698441   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:27.770032   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:25.312127   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.312599   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.512955   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:27.515102   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:25.583597   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:28.083843   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.270332   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:30.284645   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:30.284716   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:30.324096   58817 cri.go:89] found id: ""
	I0719 15:50:30.324120   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.324128   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:30.324133   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:30.324181   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:30.362682   58817 cri.go:89] found id: ""
	I0719 15:50:30.362749   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.362769   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:30.362777   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:30.362848   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:30.400797   58817 cri.go:89] found id: ""
	I0719 15:50:30.400829   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.400840   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:30.400847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:30.400910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:30.438441   58817 cri.go:89] found id: ""
	I0719 15:50:30.438471   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.438482   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:30.438490   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:30.438556   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:30.481525   58817 cri.go:89] found id: ""
	I0719 15:50:30.481555   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.481567   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:30.481581   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:30.481643   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:30.527384   58817 cri.go:89] found id: ""
	I0719 15:50:30.527416   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.527426   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:30.527434   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:30.527495   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:30.591502   58817 cri.go:89] found id: ""
	I0719 15:50:30.591530   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.591540   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:30.591548   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:30.591603   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:30.627271   58817 cri.go:89] found id: ""
	I0719 15:50:30.627298   58817 logs.go:276] 0 containers: []
	W0719 15:50:30.627306   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:30.627315   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:30.627326   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:30.680411   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:30.680463   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:30.694309   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:30.694344   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:30.771740   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:30.771776   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:30.771794   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:30.857591   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:30.857625   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:29.815683   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.312009   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.312309   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.013332   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:32.013381   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:30.583436   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.082937   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:33.407376   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:33.421602   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:33.421680   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:33.458608   58817 cri.go:89] found id: ""
	I0719 15:50:33.458640   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.458650   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:33.458658   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:33.458720   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:33.494250   58817 cri.go:89] found id: ""
	I0719 15:50:33.494279   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.494290   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:33.494298   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:33.494363   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:33.534768   58817 cri.go:89] found id: ""
	I0719 15:50:33.534793   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.534804   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:33.534811   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:33.534876   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:33.569912   58817 cri.go:89] found id: ""
	I0719 15:50:33.569942   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.569950   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:33.569955   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:33.570010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:33.605462   58817 cri.go:89] found id: ""
	I0719 15:50:33.605486   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.605496   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:33.605503   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:33.605569   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:33.649091   58817 cri.go:89] found id: ""
	I0719 15:50:33.649121   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.649129   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:33.649134   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:33.649184   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:33.682056   58817 cri.go:89] found id: ""
	I0719 15:50:33.682084   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.682092   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:33.682097   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:33.682145   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:33.717454   58817 cri.go:89] found id: ""
	I0719 15:50:33.717483   58817 logs.go:276] 0 containers: []
	W0719 15:50:33.717492   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:33.717501   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:33.717513   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:33.770793   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:33.770828   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:33.784549   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:33.784583   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:33.860831   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:33.860851   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:33.860862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:33.936003   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:33.936037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.476206   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:36.489032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:36.489090   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:36.525070   58817 cri.go:89] found id: ""
	I0719 15:50:36.525098   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.525108   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:36.525116   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:36.525171   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:36.560278   58817 cri.go:89] found id: ""
	I0719 15:50:36.560301   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.560309   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:36.560315   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:36.560367   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:36.595594   58817 cri.go:89] found id: ""
	I0719 15:50:36.595620   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.595630   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:36.595637   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:36.595696   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:36.631403   58817 cri.go:89] found id: ""
	I0719 15:50:36.631434   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.631442   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:36.631447   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:36.631502   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:36.671387   58817 cri.go:89] found id: ""
	I0719 15:50:36.671413   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.671424   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:36.671431   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:36.671492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:36.705473   58817 cri.go:89] found id: ""
	I0719 15:50:36.705500   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.705507   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:36.705514   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:36.705559   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:36.741077   58817 cri.go:89] found id: ""
	I0719 15:50:36.741110   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.741126   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:36.741133   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:36.741195   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:36.781987   58817 cri.go:89] found id: ""
	I0719 15:50:36.782016   58817 logs.go:276] 0 containers: []
	W0719 15:50:36.782025   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:36.782036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:36.782051   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:36.795107   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:36.795138   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:36.869034   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:36.869056   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:36.869070   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:36.946172   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:36.946207   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:36.983497   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:36.983535   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:36.812745   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.312184   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:34.513321   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:36.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.012035   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:35.084310   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:37.583482   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:39.537658   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:39.551682   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:39.551756   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:39.588176   58817 cri.go:89] found id: ""
	I0719 15:50:39.588199   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.588206   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:39.588212   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:39.588255   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:39.623202   58817 cri.go:89] found id: ""
	I0719 15:50:39.623235   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.623245   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:39.623265   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:39.623317   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:39.658601   58817 cri.go:89] found id: ""
	I0719 15:50:39.658634   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.658646   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:39.658653   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:39.658712   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:39.694820   58817 cri.go:89] found id: ""
	I0719 15:50:39.694842   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.694852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:39.694859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:39.694922   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:39.734296   58817 cri.go:89] found id: ""
	I0719 15:50:39.734325   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.734333   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:39.734339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:39.734393   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:39.773416   58817 cri.go:89] found id: ""
	I0719 15:50:39.773506   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.773527   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:39.773538   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:39.773614   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:39.812265   58817 cri.go:89] found id: ""
	I0719 15:50:39.812293   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.812303   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:39.812311   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:39.812366   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:39.849148   58817 cri.go:89] found id: ""
	I0719 15:50:39.849177   58817 logs.go:276] 0 containers: []
	W0719 15:50:39.849188   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:39.849199   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:39.849213   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:39.900254   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:39.900285   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:39.913997   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:39.914025   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:39.986937   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:39.986963   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:39.986982   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:40.071967   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:40.072009   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:42.612170   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:42.625741   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:42.625824   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:42.662199   58817 cri.go:89] found id: ""
	I0719 15:50:42.662230   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.662253   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:42.662261   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:42.662314   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:42.702346   58817 cri.go:89] found id: ""
	I0719 15:50:42.702374   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.702387   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:42.702394   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:42.702454   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:42.743446   58817 cri.go:89] found id: ""
	I0719 15:50:42.743475   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.743488   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:42.743495   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:42.743555   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:42.783820   58817 cri.go:89] found id: ""
	I0719 15:50:42.783844   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.783852   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:42.783858   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:42.783917   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:42.821375   58817 cri.go:89] found id: ""
	I0719 15:50:42.821403   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.821414   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:42.821421   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:42.821484   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:42.856010   58817 cri.go:89] found id: ""
	I0719 15:50:42.856037   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.856045   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:42.856051   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:42.856097   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:42.895867   58817 cri.go:89] found id: ""
	I0719 15:50:42.895894   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.895902   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:42.895908   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:42.895955   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:42.933077   58817 cri.go:89] found id: ""
	I0719 15:50:42.933106   58817 logs.go:276] 0 containers: []
	W0719 15:50:42.933114   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:42.933123   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:42.933135   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:42.984103   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:42.984142   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:42.998043   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:42.998075   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:43.069188   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:43.069210   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:43.069222   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:43.148933   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:43.148991   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:41.313263   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.816257   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:41.014458   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:43.017012   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:40.083591   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:42.582246   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:44.582857   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.687007   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:45.701019   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:45.701099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:45.737934   58817 cri.go:89] found id: ""
	I0719 15:50:45.737960   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.737970   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:45.737978   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:45.738037   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:45.774401   58817 cri.go:89] found id: ""
	I0719 15:50:45.774428   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.774438   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:45.774447   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:45.774503   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:45.814507   58817 cri.go:89] found id: ""
	I0719 15:50:45.814533   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.814544   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:45.814551   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:45.814610   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:45.855827   58817 cri.go:89] found id: ""
	I0719 15:50:45.855852   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.855870   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:45.855877   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:45.855928   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:45.898168   58817 cri.go:89] found id: ""
	I0719 15:50:45.898196   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.898204   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:45.898209   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:45.898281   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:45.933402   58817 cri.go:89] found id: ""
	I0719 15:50:45.933433   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.933449   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:45.933468   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:45.933525   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:45.971415   58817 cri.go:89] found id: ""
	I0719 15:50:45.971443   58817 logs.go:276] 0 containers: []
	W0719 15:50:45.971451   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:45.971457   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:45.971508   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:46.006700   58817 cri.go:89] found id: ""
	I0719 15:50:46.006729   58817 logs.go:276] 0 containers: []
	W0719 15:50:46.006739   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:46.006750   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:46.006764   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:46.083885   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:46.083925   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:46.122277   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:46.122308   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:46.172907   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:46.172940   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:46.186365   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:46.186392   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:46.263803   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:46.312320   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.312805   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:45.512849   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.013822   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:46.582906   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.583537   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:48.764336   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:48.778927   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:48.779002   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:48.816538   58817 cri.go:89] found id: ""
	I0719 15:50:48.816566   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.816576   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:48.816589   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:48.816657   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:48.852881   58817 cri.go:89] found id: ""
	I0719 15:50:48.852904   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.852912   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:48.852925   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:48.852987   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:48.886156   58817 cri.go:89] found id: ""
	I0719 15:50:48.886187   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.886196   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:48.886202   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:48.886271   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:48.922221   58817 cri.go:89] found id: ""
	I0719 15:50:48.922270   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.922281   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:48.922289   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:48.922350   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:48.957707   58817 cri.go:89] found id: ""
	I0719 15:50:48.957735   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.957743   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:48.957750   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:48.957797   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:48.994635   58817 cri.go:89] found id: ""
	I0719 15:50:48.994667   58817 logs.go:276] 0 containers: []
	W0719 15:50:48.994679   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:48.994687   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:48.994747   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:49.028849   58817 cri.go:89] found id: ""
	I0719 15:50:49.028873   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.028881   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:49.028886   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:49.028933   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:49.063835   58817 cri.go:89] found id: ""
	I0719 15:50:49.063865   58817 logs.go:276] 0 containers: []
	W0719 15:50:49.063875   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:49.063885   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:49.063900   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:49.144709   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:49.144751   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:49.184783   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:49.184819   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:49.237005   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:49.237037   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:49.250568   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:49.250595   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:49.319473   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:51.820132   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:51.833230   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:51.833298   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:51.870393   58817 cri.go:89] found id: ""
	I0719 15:50:51.870424   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.870435   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:51.870442   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:51.870496   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:51.906094   58817 cri.go:89] found id: ""
	I0719 15:50:51.906119   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.906132   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:51.906139   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:51.906192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:51.941212   58817 cri.go:89] found id: ""
	I0719 15:50:51.941236   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.941244   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:51.941257   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:51.941300   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:51.973902   58817 cri.go:89] found id: ""
	I0719 15:50:51.973925   58817 logs.go:276] 0 containers: []
	W0719 15:50:51.973933   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:51.973938   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:51.973983   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:52.010449   58817 cri.go:89] found id: ""
	I0719 15:50:52.010476   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.010486   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:52.010493   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:52.010551   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:52.047317   58817 cri.go:89] found id: ""
	I0719 15:50:52.047343   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.047353   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:52.047360   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:52.047405   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:52.081828   58817 cri.go:89] found id: ""
	I0719 15:50:52.081859   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.081868   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:52.081875   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:52.081946   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:52.119128   58817 cri.go:89] found id: ""
	I0719 15:50:52.119156   58817 logs.go:276] 0 containers: []
	W0719 15:50:52.119164   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:52.119172   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:52.119185   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:52.132928   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:52.132955   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:52.203075   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:52.203099   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:52.203114   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:52.278743   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:52.278781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:52.325456   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:52.325492   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:50.815488   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.312626   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:50.013996   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:52.514493   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:51.082358   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:53.582566   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:54.879243   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:54.894078   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:54.894147   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:54.931463   58817 cri.go:89] found id: ""
	I0719 15:50:54.931496   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.931507   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:54.931514   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:54.931585   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:54.968803   58817 cri.go:89] found id: ""
	I0719 15:50:54.968831   58817 logs.go:276] 0 containers: []
	W0719 15:50:54.968840   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:54.968847   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:54.968911   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:55.005621   58817 cri.go:89] found id: ""
	I0719 15:50:55.005646   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.005657   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:55.005664   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:55.005733   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:55.040271   58817 cri.go:89] found id: ""
	I0719 15:50:55.040292   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.040299   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:55.040305   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:55.040349   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:55.072693   58817 cri.go:89] found id: ""
	I0719 15:50:55.072714   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.072722   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:55.072728   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:55.072779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:55.111346   58817 cri.go:89] found id: ""
	I0719 15:50:55.111373   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.111381   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:55.111386   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:55.111430   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:55.149358   58817 cri.go:89] found id: ""
	I0719 15:50:55.149385   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.149395   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:55.149402   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:55.149459   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:55.183807   58817 cri.go:89] found id: ""
	I0719 15:50:55.183834   58817 logs.go:276] 0 containers: []
	W0719 15:50:55.183845   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:55.183856   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:55.183870   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:55.234128   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:55.234157   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:50:55.247947   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:55.247971   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:50:55.317405   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:55.317425   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:55.317436   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:55.398613   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:55.398649   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:57.945601   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:50:57.960139   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:50:57.960193   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:50:58.000436   58817 cri.go:89] found id: ""
	I0719 15:50:58.000462   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.000469   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:50:58.000476   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:50:58.000522   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:50:58.041437   58817 cri.go:89] found id: ""
	I0719 15:50:58.041463   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.041472   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:50:58.041477   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:50:58.041539   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:50:58.077280   58817 cri.go:89] found id: ""
	I0719 15:50:58.077303   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.077311   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:50:58.077317   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:50:58.077373   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:50:58.111992   58817 cri.go:89] found id: ""
	I0719 15:50:58.112019   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.112026   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:50:58.112032   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:50:58.112107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:50:58.146582   58817 cri.go:89] found id: ""
	I0719 15:50:58.146610   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.146620   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:50:58.146625   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:50:58.146669   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:50:58.182159   58817 cri.go:89] found id: ""
	I0719 15:50:58.182187   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.182196   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:50:58.182204   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:50:58.182279   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:50:58.215804   58817 cri.go:89] found id: ""
	I0719 15:50:58.215834   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.215844   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:50:58.215852   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:50:58.215913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:50:58.249366   58817 cri.go:89] found id: ""
	I0719 15:50:58.249392   58817 logs.go:276] 0 containers: []
	W0719 15:50:58.249402   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:50:58.249413   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:50:58.249430   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:50:55.814460   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.313739   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:55.014039   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:57.513248   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:56.082876   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:50:58.583172   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	W0719 15:50:58.324510   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:50:58.324536   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:50:58.324550   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:50:58.406320   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:50:58.406353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:50:58.449820   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:50:58.449854   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:50:58.502245   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:50:58.502281   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:01.018374   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:01.032683   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:01.032753   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:01.071867   58817 cri.go:89] found id: ""
	I0719 15:51:01.071898   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.071910   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:01.071917   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:01.071982   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:01.108227   58817 cri.go:89] found id: ""
	I0719 15:51:01.108251   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.108259   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:01.108264   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:01.108309   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:01.143029   58817 cri.go:89] found id: ""
	I0719 15:51:01.143064   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.143076   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:01.143083   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:01.143154   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:01.178871   58817 cri.go:89] found id: ""
	I0719 15:51:01.178901   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.178911   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:01.178919   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:01.178974   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:01.216476   58817 cri.go:89] found id: ""
	I0719 15:51:01.216507   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.216518   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:01.216526   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:01.216584   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:01.254534   58817 cri.go:89] found id: ""
	I0719 15:51:01.254557   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.254565   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:01.254572   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:01.254617   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:01.293156   58817 cri.go:89] found id: ""
	I0719 15:51:01.293187   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.293198   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:01.293212   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:01.293278   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:01.328509   58817 cri.go:89] found id: ""
	I0719 15:51:01.328538   58817 logs.go:276] 0 containers: []
	W0719 15:51:01.328549   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:01.328560   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:01.328574   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:01.399659   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:01.399678   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:01.399693   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:01.476954   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:01.476993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:01.519513   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:01.519539   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:01.571976   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:01.572015   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:00.812445   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.813629   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.011751   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:02.013062   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.013473   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:00.584028   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:03.082149   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:04.088726   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:04.102579   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:04.102642   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:04.141850   58817 cri.go:89] found id: ""
	I0719 15:51:04.141888   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.141899   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:04.141907   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:04.141988   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:04.177821   58817 cri.go:89] found id: ""
	I0719 15:51:04.177846   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.177854   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:04.177859   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:04.177914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:04.212905   58817 cri.go:89] found id: ""
	I0719 15:51:04.212935   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.212945   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:04.212951   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:04.213012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:04.249724   58817 cri.go:89] found id: ""
	I0719 15:51:04.249762   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.249773   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:04.249781   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:04.249843   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:04.285373   58817 cri.go:89] found id: ""
	I0719 15:51:04.285407   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.285418   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:04.285430   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:04.285490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:04.348842   58817 cri.go:89] found id: ""
	I0719 15:51:04.348878   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.348888   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:04.348895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:04.348963   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:04.384420   58817 cri.go:89] found id: ""
	I0719 15:51:04.384448   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.384459   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:04.384466   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:04.384533   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:04.420716   58817 cri.go:89] found id: ""
	I0719 15:51:04.420746   58817 logs.go:276] 0 containers: []
	W0719 15:51:04.420754   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:04.420763   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:04.420775   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:04.472986   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:04.473027   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:04.488911   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:04.488938   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:04.563103   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:04.563125   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:04.563139   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:04.640110   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:04.640151   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.183190   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:07.196605   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:07.196667   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:07.234974   58817 cri.go:89] found id: ""
	I0719 15:51:07.235002   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.235010   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:07.235016   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:07.235066   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:07.269045   58817 cri.go:89] found id: ""
	I0719 15:51:07.269078   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.269089   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:07.269096   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:07.269156   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:07.308866   58817 cri.go:89] found id: ""
	I0719 15:51:07.308897   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.308907   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:07.308914   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:07.308973   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:07.344406   58817 cri.go:89] found id: ""
	I0719 15:51:07.344440   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.344451   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:07.344459   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:07.344517   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:07.379914   58817 cri.go:89] found id: ""
	I0719 15:51:07.379948   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.379956   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:07.379962   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:07.380010   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:07.420884   58817 cri.go:89] found id: ""
	I0719 15:51:07.420923   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.420934   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:07.420942   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:07.421012   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:07.455012   58817 cri.go:89] found id: ""
	I0719 15:51:07.455041   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.455071   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:07.455082   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:07.455151   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:07.492321   58817 cri.go:89] found id: ""
	I0719 15:51:07.492346   58817 logs.go:276] 0 containers: []
	W0719 15:51:07.492354   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:07.492362   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:07.492374   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:07.506377   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:07.506408   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:07.578895   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:07.578928   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:07.578943   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:07.662333   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:07.662373   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:07.701823   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:07.701856   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:05.312865   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.816945   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:06.513634   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.012283   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:05.084185   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:07.583429   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:09.583944   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:10.256610   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:10.270156   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:10.270225   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:10.311318   58817 cri.go:89] found id: ""
	I0719 15:51:10.311347   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.311357   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:10.311365   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:10.311422   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:10.347145   58817 cri.go:89] found id: ""
	I0719 15:51:10.347174   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.347183   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:10.347189   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:10.347243   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:10.381626   58817 cri.go:89] found id: ""
	I0719 15:51:10.381659   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.381672   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:10.381680   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:10.381750   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:10.417077   58817 cri.go:89] found id: ""
	I0719 15:51:10.417103   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.417111   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:10.417117   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:10.417174   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:10.454094   58817 cri.go:89] found id: ""
	I0719 15:51:10.454123   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.454131   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:10.454137   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:10.454185   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:10.489713   58817 cri.go:89] found id: ""
	I0719 15:51:10.489739   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.489747   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:10.489753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:10.489799   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:10.524700   58817 cri.go:89] found id: ""
	I0719 15:51:10.524737   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.524745   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:10.524753   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:10.524810   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:10.564249   58817 cri.go:89] found id: ""
	I0719 15:51:10.564277   58817 logs.go:276] 0 containers: []
	W0719 15:51:10.564285   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:10.564293   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:10.564309   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:10.618563   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:10.618599   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:10.633032   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:10.633058   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:10.706504   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:10.706530   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:10.706546   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:10.800542   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:10.800581   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:10.315941   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:12.812732   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.013749   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.513338   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:11.584335   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:14.083745   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:13.357761   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:13.371415   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:13.371492   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:13.406666   58817 cri.go:89] found id: ""
	I0719 15:51:13.406695   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.406705   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:13.406713   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:13.406773   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:13.448125   58817 cri.go:89] found id: ""
	I0719 15:51:13.448153   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.448164   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:13.448171   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:13.448233   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:13.483281   58817 cri.go:89] found id: ""
	I0719 15:51:13.483306   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.483315   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:13.483323   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:13.483384   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:13.522499   58817 cri.go:89] found id: ""
	I0719 15:51:13.522527   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.522538   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:13.522545   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:13.522605   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:13.560011   58817 cri.go:89] found id: ""
	I0719 15:51:13.560038   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.560049   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:13.560056   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:13.560115   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:13.596777   58817 cri.go:89] found id: ""
	I0719 15:51:13.596812   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.596824   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:13.596832   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:13.596883   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:13.633765   58817 cri.go:89] found id: ""
	I0719 15:51:13.633790   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.633798   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:13.633804   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:13.633857   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:13.670129   58817 cri.go:89] found id: ""
	I0719 15:51:13.670151   58817 logs.go:276] 0 containers: []
	W0719 15:51:13.670160   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:13.670168   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:13.670179   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:13.745337   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:13.745363   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:13.745375   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:13.827800   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:13.827831   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:13.871659   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:13.871695   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:13.925445   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:13.925478   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.439455   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:16.454414   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:16.454485   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:16.494962   58817 cri.go:89] found id: ""
	I0719 15:51:16.494987   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.494997   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:16.495004   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:16.495048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:16.540948   58817 cri.go:89] found id: ""
	I0719 15:51:16.540978   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.540986   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:16.540992   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:16.541052   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:16.588886   58817 cri.go:89] found id: ""
	I0719 15:51:16.588916   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.588926   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:16.588933   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:16.588990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:16.649174   58817 cri.go:89] found id: ""
	I0719 15:51:16.649198   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.649207   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:16.649214   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:16.649260   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:16.688759   58817 cri.go:89] found id: ""
	I0719 15:51:16.688787   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.688794   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:16.688800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:16.688860   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:16.724730   58817 cri.go:89] found id: ""
	I0719 15:51:16.724759   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.724767   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:16.724773   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:16.724831   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:16.762972   58817 cri.go:89] found id: ""
	I0719 15:51:16.762995   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.763002   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:16.763007   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:16.763058   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:16.798054   58817 cri.go:89] found id: ""
	I0719 15:51:16.798080   58817 logs.go:276] 0 containers: []
	W0719 15:51:16.798088   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:16.798096   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:16.798107   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:16.887495   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:16.887533   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:16.929384   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:16.929412   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:16.978331   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:16.978362   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:16.991663   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:16.991687   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:17.064706   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:15.311404   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:17.312317   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.013193   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:18.014317   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:16.583403   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.082807   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:19.565881   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:19.579476   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:19.579536   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:19.614551   58817 cri.go:89] found id: ""
	I0719 15:51:19.614576   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.614586   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:19.614595   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:19.614655   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:19.657984   58817 cri.go:89] found id: ""
	I0719 15:51:19.658012   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.658023   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:19.658030   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:19.658098   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:19.692759   58817 cri.go:89] found id: ""
	I0719 15:51:19.692785   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.692793   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:19.692800   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:19.692855   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:19.726119   58817 cri.go:89] found id: ""
	I0719 15:51:19.726148   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.726158   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:19.726174   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:19.726230   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:19.763348   58817 cri.go:89] found id: ""
	I0719 15:51:19.763372   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.763379   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:19.763385   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:19.763439   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:19.796880   58817 cri.go:89] found id: ""
	I0719 15:51:19.796909   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.796923   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:19.796929   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:19.796977   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:19.831819   58817 cri.go:89] found id: ""
	I0719 15:51:19.831845   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.831853   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:19.831859   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:19.831913   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:19.866787   58817 cri.go:89] found id: ""
	I0719 15:51:19.866814   58817 logs.go:276] 0 containers: []
	W0719 15:51:19.866825   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:19.866835   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:19.866848   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:19.914087   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:19.914120   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.927236   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:19.927260   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:19.995619   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:19.995643   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:19.995658   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:20.084355   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:20.084385   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:22.623263   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:22.637745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:22.637818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:22.678276   58817 cri.go:89] found id: ""
	I0719 15:51:22.678305   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.678317   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:22.678325   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:22.678378   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:22.716710   58817 cri.go:89] found id: ""
	I0719 15:51:22.716736   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.716753   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:22.716761   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:22.716828   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:22.754965   58817 cri.go:89] found id: ""
	I0719 15:51:22.754993   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.755002   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:22.755008   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:22.755054   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:22.788474   58817 cri.go:89] found id: ""
	I0719 15:51:22.788508   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.788519   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:22.788527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:22.788586   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:22.823838   58817 cri.go:89] found id: ""
	I0719 15:51:22.823872   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.823882   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:22.823889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:22.823950   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:22.863086   58817 cri.go:89] found id: ""
	I0719 15:51:22.863127   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.863138   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:22.863146   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:22.863211   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:22.899292   58817 cri.go:89] found id: ""
	I0719 15:51:22.899321   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.899331   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:22.899339   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:22.899403   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:22.932292   58817 cri.go:89] found id: ""
	I0719 15:51:22.932318   58817 logs.go:276] 0 containers: []
	W0719 15:51:22.932328   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:22.932338   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:22.932353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:23.003438   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:23.003460   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:23.003477   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:23.088349   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:23.088391   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:23.132169   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:23.132194   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:23.184036   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:23.184069   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:19.812659   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.813178   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.311781   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:20.512610   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:22.512707   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:21.083030   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:23.583501   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.698493   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:25.712199   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:25.712267   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:25.750330   58817 cri.go:89] found id: ""
	I0719 15:51:25.750358   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.750368   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:25.750375   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:25.750434   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:25.784747   58817 cri.go:89] found id: ""
	I0719 15:51:25.784777   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.784788   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:25.784794   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:25.784853   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:25.821272   58817 cri.go:89] found id: ""
	I0719 15:51:25.821297   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.821308   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:25.821315   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:25.821370   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:25.858697   58817 cri.go:89] found id: ""
	I0719 15:51:25.858723   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.858732   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:25.858737   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:25.858782   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:25.901706   58817 cri.go:89] found id: ""
	I0719 15:51:25.901738   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.901749   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:25.901757   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:25.901818   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:25.943073   58817 cri.go:89] found id: ""
	I0719 15:51:25.943103   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.943115   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:25.943122   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:25.943190   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:25.982707   58817 cri.go:89] found id: ""
	I0719 15:51:25.982731   58817 logs.go:276] 0 containers: []
	W0719 15:51:25.982739   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:25.982745   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:25.982791   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:26.023419   58817 cri.go:89] found id: ""
	I0719 15:51:26.023442   58817 logs.go:276] 0 containers: []
	W0719 15:51:26.023449   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:26.023456   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:26.023468   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:26.103842   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:26.103875   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:26.143567   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:26.143594   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:26.199821   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:26.199862   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:26.214829   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:26.214865   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:26.287368   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:26.312416   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.313406   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:24.513171   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:27.012377   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:29.014890   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:25.583785   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.083633   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:28.788202   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:28.801609   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:28.801676   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:28.834911   58817 cri.go:89] found id: ""
	I0719 15:51:28.834937   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.834947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:28.834955   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:28.835013   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:28.868219   58817 cri.go:89] found id: ""
	I0719 15:51:28.868242   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.868250   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:28.868256   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:28.868315   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:28.904034   58817 cri.go:89] found id: ""
	I0719 15:51:28.904055   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.904063   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:28.904068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:28.904121   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:28.941019   58817 cri.go:89] found id: ""
	I0719 15:51:28.941051   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.941061   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:28.941068   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:28.941129   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:28.976309   58817 cri.go:89] found id: ""
	I0719 15:51:28.976335   58817 logs.go:276] 0 containers: []
	W0719 15:51:28.976346   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:28.976352   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:28.976410   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:29.011340   58817 cri.go:89] found id: ""
	I0719 15:51:29.011368   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.011378   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:29.011388   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:29.011447   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:29.044356   58817 cri.go:89] found id: ""
	I0719 15:51:29.044378   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.044385   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:29.044390   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:29.044438   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:29.080883   58817 cri.go:89] found id: ""
	I0719 15:51:29.080910   58817 logs.go:276] 0 containers: []
	W0719 15:51:29.080919   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:29.080929   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:29.080941   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:29.160266   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:29.160303   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:29.198221   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:29.198267   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:29.249058   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:29.249088   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:29.262711   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:29.262740   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:29.335654   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:31.836354   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:31.851895   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:31.851957   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:31.887001   58817 cri.go:89] found id: ""
	I0719 15:51:31.887036   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.887052   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:31.887058   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:31.887107   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:31.922102   58817 cri.go:89] found id: ""
	I0719 15:51:31.922132   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.922140   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:31.922145   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:31.922196   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:31.960183   58817 cri.go:89] found id: ""
	I0719 15:51:31.960208   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.960215   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:31.960221   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:31.960263   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:31.994822   58817 cri.go:89] found id: ""
	I0719 15:51:31.994849   58817 logs.go:276] 0 containers: []
	W0719 15:51:31.994859   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:31.994865   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:31.994912   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:32.034110   58817 cri.go:89] found id: ""
	I0719 15:51:32.034136   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.034145   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:32.034151   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:32.034209   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:32.071808   58817 cri.go:89] found id: ""
	I0719 15:51:32.071834   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.071842   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:32.071847   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:32.071910   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:32.110784   58817 cri.go:89] found id: ""
	I0719 15:51:32.110810   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.110820   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:32.110828   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:32.110895   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:32.148052   58817 cri.go:89] found id: ""
	I0719 15:51:32.148086   58817 logs.go:276] 0 containers: []
	W0719 15:51:32.148097   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:32.148108   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:32.148124   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:32.198891   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:32.198926   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:32.212225   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:32.212251   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:32.288389   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:32.288412   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:32.288431   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:32.368196   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:32.368229   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:30.811822   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.813013   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:31.512155   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.012636   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:30.083916   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:32.582845   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.582945   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:34.911872   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:34.926689   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:34.926771   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:34.959953   58817 cri.go:89] found id: ""
	I0719 15:51:34.959982   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.959992   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:34.960000   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:34.960061   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:34.999177   58817 cri.go:89] found id: ""
	I0719 15:51:34.999206   58817 logs.go:276] 0 containers: []
	W0719 15:51:34.999216   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:34.999223   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:34.999283   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:35.036001   58817 cri.go:89] found id: ""
	I0719 15:51:35.036034   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.036045   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:35.036052   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:35.036099   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:35.070375   58817 cri.go:89] found id: ""
	I0719 15:51:35.070404   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.070415   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:35.070423   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:35.070483   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:35.106940   58817 cri.go:89] found id: ""
	I0719 15:51:35.106969   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.106979   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:35.106984   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:35.107031   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:35.151664   58817 cri.go:89] found id: ""
	I0719 15:51:35.151688   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.151695   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:35.151700   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:35.151748   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:35.187536   58817 cri.go:89] found id: ""
	I0719 15:51:35.187564   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.187578   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:35.187588   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:35.187662   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.222614   58817 cri.go:89] found id: ""
	I0719 15:51:35.222642   58817 logs.go:276] 0 containers: []
	W0719 15:51:35.222652   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:35.222662   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:35.222677   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:35.273782   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:35.273816   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:35.288147   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:35.288176   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:35.361085   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:35.361107   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:35.361118   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:35.443327   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:35.443358   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:37.994508   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:38.007709   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:38.007779   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:38.040910   58817 cri.go:89] found id: ""
	I0719 15:51:38.040940   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.040947   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:38.040954   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:38.040999   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:38.080009   58817 cri.go:89] found id: ""
	I0719 15:51:38.080039   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.080058   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:38.080066   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:38.080137   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:38.115997   58817 cri.go:89] found id: ""
	I0719 15:51:38.116018   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.116026   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:38.116031   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:38.116079   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:38.150951   58817 cri.go:89] found id: ""
	I0719 15:51:38.150973   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.150981   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:38.150987   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:38.151045   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:38.184903   58817 cri.go:89] found id: ""
	I0719 15:51:38.184938   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.184949   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:38.184956   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:38.185014   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:38.218099   58817 cri.go:89] found id: ""
	I0719 15:51:38.218123   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.218131   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:38.218138   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:38.218192   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:38.252965   58817 cri.go:89] found id: ""
	I0719 15:51:38.252990   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.252997   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:38.253003   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:38.253047   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:35.313638   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:37.813400   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.013415   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.513387   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:36.583140   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:39.084770   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:38.289710   58817 cri.go:89] found id: ""
	I0719 15:51:38.289739   58817 logs.go:276] 0 containers: []
	W0719 15:51:38.289749   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:38.289757   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:38.289770   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:38.340686   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:38.340715   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:38.354334   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:38.354357   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:38.424410   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:38.424438   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:38.424452   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:38.500744   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:38.500781   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.043436   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:41.056857   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:41.056914   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:41.093651   58817 cri.go:89] found id: ""
	I0719 15:51:41.093678   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.093688   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:41.093695   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:41.093749   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:41.129544   58817 cri.go:89] found id: ""
	I0719 15:51:41.129572   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.129580   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:41.129586   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:41.129646   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:41.163416   58817 cri.go:89] found id: ""
	I0719 15:51:41.163444   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.163457   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:41.163465   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:41.163520   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:41.199180   58817 cri.go:89] found id: ""
	I0719 15:51:41.199205   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.199212   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:41.199220   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:41.199274   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:41.233891   58817 cri.go:89] found id: ""
	I0719 15:51:41.233919   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.233929   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:41.233936   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:41.233990   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:41.270749   58817 cri.go:89] found id: ""
	I0719 15:51:41.270777   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.270788   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:41.270794   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:41.270841   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:41.308365   58817 cri.go:89] found id: ""
	I0719 15:51:41.308393   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.308402   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:41.308408   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:41.308462   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:41.344692   58817 cri.go:89] found id: ""
	I0719 15:51:41.344720   58817 logs.go:276] 0 containers: []
	W0719 15:51:41.344729   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:41.344738   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:41.344749   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:41.420009   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:41.420035   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:41.420052   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:41.503356   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:41.503397   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:41.543875   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:41.543905   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:41.595322   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:41.595353   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:40.312909   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:42.812703   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.011956   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:43.513117   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:41.584336   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.082447   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:44.110343   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:44.125297   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:44.125365   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:44.160356   58817 cri.go:89] found id: ""
	I0719 15:51:44.160387   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.160398   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:44.160405   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:44.160461   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:44.195025   58817 cri.go:89] found id: ""
	I0719 15:51:44.195055   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.195065   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:44.195073   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:44.195140   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:44.227871   58817 cri.go:89] found id: ""
	I0719 15:51:44.227907   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.227929   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:44.227937   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:44.228000   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:44.265270   58817 cri.go:89] found id: ""
	I0719 15:51:44.265296   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.265305   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:44.265312   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:44.265368   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:44.298714   58817 cri.go:89] found id: ""
	I0719 15:51:44.298744   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.298755   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:44.298762   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:44.298826   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:44.332638   58817 cri.go:89] found id: ""
	I0719 15:51:44.332665   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.332673   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:44.332679   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:44.332738   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:44.366871   58817 cri.go:89] found id: ""
	I0719 15:51:44.366897   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.366906   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:44.366913   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:44.366980   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:44.409353   58817 cri.go:89] found id: ""
	I0719 15:51:44.409381   58817 logs.go:276] 0 containers: []
	W0719 15:51:44.409392   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:44.409402   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:44.409417   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.446148   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:44.446178   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:44.497188   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:44.497217   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:44.511904   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:44.511935   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:44.577175   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:44.577193   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:44.577208   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.161809   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:47.175425   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:51:47.175490   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:51:47.213648   58817 cri.go:89] found id: ""
	I0719 15:51:47.213674   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.213681   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:51:47.213687   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:51:47.213737   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:51:47.249941   58817 cri.go:89] found id: ""
	I0719 15:51:47.249967   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.249979   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:51:47.249986   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:51:47.250041   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:51:47.284232   58817 cri.go:89] found id: ""
	I0719 15:51:47.284254   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.284261   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:51:47.284267   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:51:47.284318   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:51:47.321733   58817 cri.go:89] found id: ""
	I0719 15:51:47.321767   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.321778   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:51:47.321786   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:51:47.321844   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:51:47.358479   58817 cri.go:89] found id: ""
	I0719 15:51:47.358508   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.358520   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:51:47.358527   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:51:47.358582   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:51:47.390070   58817 cri.go:89] found id: ""
	I0719 15:51:47.390098   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.390108   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:51:47.390116   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:51:47.390176   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:51:47.429084   58817 cri.go:89] found id: ""
	I0719 15:51:47.429111   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.429118   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:51:47.429124   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:51:47.429179   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:51:47.469938   58817 cri.go:89] found id: ""
	I0719 15:51:47.469969   58817 logs.go:276] 0 containers: []
	W0719 15:51:47.469979   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:51:47.469991   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:51:47.470005   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:51:47.524080   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:51:47.524110   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:51:47.538963   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:51:47.538993   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:51:47.609107   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:51:47.609128   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:51:47.609143   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:51:47.691984   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:51:47.692028   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:51:44.813328   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:47.318119   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.013597   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.513037   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:46.083435   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:48.582222   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.234104   58817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:51:50.248706   58817 kubeadm.go:597] duration metric: took 4m2.874850727s to restartPrimaryControlPlane
	W0719 15:51:50.248802   58817 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:50.248827   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:50.712030   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:51:50.727328   58817 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:51:50.737545   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:51:50.748830   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:51:50.748855   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:51:50.748900   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:51:50.758501   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:51:50.758548   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:51:50.767877   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:51:50.777413   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:51:50.777477   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:51:50.787005   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.795917   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:51:50.795971   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:51:50.805058   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:51:50.814014   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:51:50.814069   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
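The per-file checks above (grep each kubeconfig for the expected control-plane endpoint, then remove the file when the grep fails) can be written as one loop. A compact equivalent sketch, using the exact paths and endpoint from the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # drop kubeconfigs that do not point at the expected endpoint
    done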
	I0719 15:51:50.823876   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:51:50.893204   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:51:50.893281   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:51:51.028479   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:51:51.028607   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:51:51.028698   58817 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:51:51.212205   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:51:51.214199   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:51:51.214313   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:51:51.214423   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:51:51.214546   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:51:51.214625   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:51:51.214728   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:51:51.214813   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:51:51.214918   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:51:51.215011   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:51:51.215121   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:51:51.215231   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:51:51.215296   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:51:51.215381   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:51:51.275010   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:51:51.481366   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:51:51.685208   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:51:51.799007   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:51:51.820431   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:51:51.822171   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:51:51.822257   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:51:51.984066   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:51:51.986034   58817 out.go:204]   - Booting up control plane ...
	I0719 15:51:51.986137   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:51:51.988167   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:51:51.989122   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:51:51.989976   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:51:52.000879   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:51:49.811847   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:51.812747   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.312028   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.514497   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:53.012564   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:50.585244   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:52.587963   58417 pod_ready.go:102] pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:54.576923   58417 pod_ready.go:81] duration metric: took 4m0.000887015s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" ...
	E0719 15:51:54.576954   58417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-zwr8g" in "kube-system" namespace to be "Ready" (will not retry!)
	I0719 15:51:54.576979   58417 pod_ready.go:38] duration metric: took 4m10.045017696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:51:54.577013   58417 kubeadm.go:597] duration metric: took 4m18.572474217s to restartPrimaryControlPlane
	W0719 15:51:54.577075   58417 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 15:51:54.577107   58417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:51:56.314112   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:58.815297   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:55.012915   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:57.512491   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:01.312620   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:03.812880   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:51:59.512666   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:02.013784   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.314545   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:08.811891   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:04.512583   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:06.513519   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:09.016808   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:10.813197   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:13.313167   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:11.513329   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:14.012352   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:15.812105   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:17.812843   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:16.014362   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:18.513873   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:20.685347   58417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.108209289s)
	I0719 15:52:20.685431   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:20.699962   58417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:20.709728   58417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:20.719022   58417 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:20.719038   58417 kubeadm.go:157] found existing configuration files:
	
	I0719 15:52:20.719074   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:52:20.727669   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:52:20.727731   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:52:20.736851   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:52:20.745821   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:52:20.745867   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:52:20.755440   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.764307   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:52:20.764360   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:52:20.773759   58417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:52:20.782354   58417 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:52:20.782420   58417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 15:52:20.791186   58417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:20.837700   58417 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0719 15:52:20.837797   58417 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:52:20.958336   58417 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:20.958486   58417 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:20.958629   58417 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:20.967904   58417 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:20.969995   58417 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:20.970097   58417 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:52:20.970197   58417 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:20.970325   58417 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:52:20.970438   58417 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:52:20.970550   58417 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:52:20.970633   58417 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:52:20.970740   58417 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:52:20.970840   58417 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:52:20.970949   58417 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:52:20.971049   58417 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:52:20.971106   58417 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:52:20.971184   58417 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:21.175226   58417 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:21.355994   58417 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 15:52:21.453237   58417 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:21.569014   58417 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:21.672565   58417 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:21.673036   58417 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:21.675860   58417 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:20.312428   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:22.312770   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:24.314183   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.013099   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:23.512341   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:21.677594   58417 out.go:204]   - Booting up control plane ...
	I0719 15:52:21.677694   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:21.677787   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:21.677894   58417 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:21.695474   58417 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:21.701352   58417 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:21.701419   58417 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:52:21.831941   58417 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 15:52:21.832046   58417 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 15:52:22.333073   58417 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.399393ms
	I0719 15:52:22.333184   58417 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 15:52:27.336964   58417 kubeadm.go:310] [api-check] The API server is healthy after 5.002306078s
	I0719 15:52:27.348152   58417 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:27.366916   58417 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:27.396214   58417 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:27.396475   58417 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-382231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:27.408607   58417 kubeadm.go:310] [bootstrap-token] Using token: xdoy2n.29347ekmgral9ki3
	I0719 15:52:27.409857   58417 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:27.409991   58417 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:27.415553   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:27.424772   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:27.428421   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:27.439922   58417 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:27.443985   58417 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:27.742805   58417 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:28.253742   58417 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 15:52:28.744380   58417 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 15:52:28.744405   58417 kubeadm.go:310] 
	I0719 15:52:28.744486   58417 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:28.744498   58417 kubeadm.go:310] 
	I0719 15:52:28.744581   58417 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:28.744588   58417 kubeadm.go:310] 
	I0719 15:52:28.744633   58417 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 15:52:28.744704   58417 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:28.744783   58417 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:28.744794   58417 kubeadm.go:310] 
	I0719 15:52:28.744877   58417 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 15:52:28.744891   58417 kubeadm.go:310] 
	I0719 15:52:28.744944   58417 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:28.744951   58417 kubeadm.go:310] 
	I0719 15:52:28.744992   58417 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 15:52:28.745082   58417 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:28.745172   58417 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:28.745181   58417 kubeadm.go:310] 
	I0719 15:52:28.745253   58417 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:28.745319   58417 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 15:52:28.745332   58417 kubeadm.go:310] 
	I0719 15:52:28.745412   58417 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745499   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 \
	I0719 15:52:28.745518   58417 kubeadm.go:310] 	--control-plane 
	I0719 15:52:28.745525   58417 kubeadm.go:310] 
	I0719 15:52:28.745599   58417 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:28.745609   58417 kubeadm.go:310] 
	I0719 15:52:28.745677   58417 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xdoy2n.29347ekmgral9ki3 \
	I0719 15:52:28.745778   58417 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:107db513fdbabaa4d665297368efc858a861f3b63a12d95a32bdfdff33c73212 
	I0719 15:52:28.747435   58417 kubeadm.go:310] W0719 15:52:20.814208    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747697   58417 kubeadm.go:310] W0719 15:52:20.814905    2915 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0719 15:52:28.747795   58417 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
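The two W-lines above are kubeadm flagging the v1beta3 config API as deprecated, and the third warns that the kubelet unit is not enabled. The remediation kubeadm itself suggests, spelled out against the paths used in this run (the kubeadm binary location is inferred from the PATH override in the init command, and the new-config path is arbitrary):

    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml
    sudo systemctl enable kubelet.service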
	I0719 15:52:28.747815   58417 cni.go:84] Creating CNI manager for ""
	I0719 15:52:28.747827   58417 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 15:52:28.749619   58417 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:26.813409   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.814040   59208 pod_ready.go:102] pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:25.513048   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:27.514730   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:28.750992   58417 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:28.762976   58417 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 15:52:28.783894   58417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:28.783972   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.783989   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-382231 minikube.k8s.io/updated_at=2024_07_19T15_52_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=no-preload-382231 minikube.k8s.io/primary=true
	I0719 15:52:28.808368   58417 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:29.005658   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.505702   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.005765   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.505834   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.005837   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:31.506329   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.006419   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:32.505701   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.005735   58417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:33.130121   58417 kubeadm.go:1113] duration metric: took 4.346215264s to wait for elevateKubeSystemPrivileges
	I0719 15:52:33.130162   58417 kubeadm.go:394] duration metric: took 4m57.173876302s to StartCluster
	I0719 15:52:33.130187   58417 settings.go:142] acquiring lock: {Name:mkf161db99064622b5814f6906181f2f950ffafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.130290   58417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:52:33.131944   58417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/kubeconfig: {Name:mk3a7bf8d5a82f6ca0d75e0643009173ae572bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:33.132178   58417 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 15:52:33.132237   58417 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 15:52:33.132339   58417 addons.go:69] Setting storage-provisioner=true in profile "no-preload-382231"
	I0719 15:52:33.132358   58417 addons.go:69] Setting default-storageclass=true in profile "no-preload-382231"
	I0719 15:52:33.132381   58417 addons.go:234] Setting addon storage-provisioner=true in "no-preload-382231"
	I0719 15:52:33.132385   58417 config.go:182] Loaded profile config "no-preload-382231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0719 15:52:33.132391   58417 addons.go:243] addon storage-provisioner should already be in state true
	I0719 15:52:33.132392   58417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-382231"
	I0719 15:52:33.132419   58417 addons.go:69] Setting metrics-server=true in profile "no-preload-382231"
	I0719 15:52:33.132423   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132444   58417 addons.go:234] Setting addon metrics-server=true in "no-preload-382231"
	W0719 15:52:33.132452   58417 addons.go:243] addon metrics-server should already be in state true
	I0719 15:52:33.132474   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.132740   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132763   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132799   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132810   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.132822   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.132829   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.134856   58417 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:33.136220   58417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:33.149028   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0719 15:52:33.149128   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40555
	I0719 15:52:33.149538   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.149646   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.150093   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150108   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.150111   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150119   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.150477   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150603   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.150955   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.150971   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.151326   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
	I0719 15:52:33.151359   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.151715   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.152199   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.152223   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.152574   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.153136   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.153170   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.155187   58417 addons.go:234] Setting addon default-storageclass=true in "no-preload-382231"
	W0719 15:52:33.155207   58417 addons.go:243] addon default-storageclass should already be in state true
	I0719 15:52:33.155235   58417 host.go:66] Checking if "no-preload-382231" exists ...
	I0719 15:52:33.155572   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.155602   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.170886   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0719 15:52:33.170884   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0719 15:52:33.171439   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.171510   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37745
	I0719 15:52:33.171543   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172005   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172026   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172109   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.172141   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172162   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172538   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.172552   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.172609   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172775   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.172831   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.172875   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.173021   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.173381   58417 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:52:33.173405   58417 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:52:33.175118   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.175500   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.177023   58417 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0719 15:52:33.177041   58417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 15:52:32.000607   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:52:32.000846   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:32.001125   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
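This kubelet-check failure belongs to the v1.20.0 (old-k8s-version) control plane started at 15:51:50: the kubelet never answered its local healthz probe within the initial 40s. A minimal manual check uses the same probe and the same journal unit this log gathers elsewhere:

    curl -sSL http://localhost:10248/healthz     # the exact probe kubeadm reports above
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager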
	I0719 15:52:33.178348   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:33.178362   58417 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:33.178377   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.178450   58417 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.178469   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 15:52:33.178486   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.182287   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182598   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.182617   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.182741   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.182948   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.183074   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.183204   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.183372   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183940   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.183959   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.183994   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.184237   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.184356   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.184505   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.191628   58417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0719 15:52:33.191984   58417 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:52:33.192366   58417 main.go:141] libmachine: Using API Version  1
	I0719 15:52:33.192385   58417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:52:33.192707   58417 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:52:33.192866   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetState
	I0719 15:52:33.194285   58417 main.go:141] libmachine: (no-preload-382231) Calling .DriverName
	I0719 15:52:33.194485   58417 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.194499   58417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:33.194514   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHHostname
	I0719 15:52:33.197526   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.197853   58417 main.go:141] libmachine: (no-preload-382231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:09:0a", ip: ""} in network mk-no-preload-382231: {Iface:virbr1 ExpiryTime:2024-07-19 16:37:54 +0000 UTC Type:0 Mac:52:54:00:72:09:0a Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:no-preload-382231 Clientid:01:52:54:00:72:09:0a}
	I0719 15:52:33.197872   58417 main.go:141] libmachine: (no-preload-382231) DBG | domain no-preload-382231 has defined IP address 192.168.39.227 and MAC address 52:54:00:72:09:0a in network mk-no-preload-382231
	I0719 15:52:33.198087   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHPort
	I0719 15:52:33.198335   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHKeyPath
	I0719 15:52:33.198472   58417 main.go:141] libmachine: (no-preload-382231) Calling .GetSSHUsername
	I0719 15:52:33.198604   58417 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/no-preload-382231/id_rsa Username:docker}
	I0719 15:52:33.382687   58417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 15:52:33.403225   58417 node_ready.go:35] waiting up to 6m0s for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430507   58417 node_ready.go:49] node "no-preload-382231" has status "Ready":"True"
	I0719 15:52:33.430535   58417 node_ready.go:38] duration metric: took 27.282654ms for node "no-preload-382231" to be "Ready" ...
	I0719 15:52:33.430546   58417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:33.482352   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.555210   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 15:52:33.565855   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:33.565874   58417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0719 15:52:33.571653   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:33.609541   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:33.609569   58417 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:33.674428   58417 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:33.674455   58417 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:33.746703   58417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:34.092029   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092051   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092341   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092359   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.092369   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.092379   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.092604   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.092628   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.092634   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.093766   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.093785   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094025   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094043   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094076   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.094088   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.094325   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.094343   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.094349   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128393   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.128412   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.128715   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.128766   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.128775   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.319737   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.319764   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320141   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320161   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320165   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.320184   58417 main.go:141] libmachine: Making call to close driver server
	I0719 15:52:34.320199   58417 main.go:141] libmachine: (no-preload-382231) Calling .Close
	I0719 15:52:34.320441   58417 main.go:141] libmachine: Successfully made call to close driver server
	I0719 15:52:34.320462   58417 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 15:52:34.320475   58417 addons.go:475] Verifying addon metrics-server=true in "no-preload-382231"
	I0719 15:52:34.320482   58417 main.go:141] libmachine: (no-preload-382231) DBG | Closing plugin on server side
	I0719 15:52:34.322137   58417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
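The "metrics-server ... Ready:False" lines repeated through this section are the condition the StartStop tests keep polling. Once the addons are enabled as above, that state can be inspected directly; a sketch assuming the kubeconfig context carries the profile name and that the addon pods carry the usual k8s-app=metrics-server label:

    kubectl --context no-preload-382231 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context no-preload-382231 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 30
    kubectl --context no-preload-382231 top nodes   # only returns data once metrics-server is actually serving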
	I0719 15:52:30.812091   59208 pod_ready.go:81] duration metric: took 4m0.006187238s for pod "metrics-server-569cc877fc-h7hgv" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:30.812113   59208 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:30.812120   59208 pod_ready.go:38] duration metric: took 4m8.614544303s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.812135   59208 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:30.812161   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:30.812208   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:30.861054   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:30.861074   59208 cri.go:89] found id: ""
	I0719 15:52:30.861083   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:30.861144   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.865653   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:30.865708   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:30.900435   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:30.900459   59208 cri.go:89] found id: ""
	I0719 15:52:30.900468   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:30.900512   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.904686   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:30.904747   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:30.950618   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.950638   59208 cri.go:89] found id: ""
	I0719 15:52:30.950646   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:30.950691   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:30.955080   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:30.955147   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:30.996665   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:30.996691   59208 cri.go:89] found id: ""
	I0719 15:52:30.996704   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:30.996778   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.001122   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:31.001191   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:31.042946   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.042969   59208 cri.go:89] found id: ""
	I0719 15:52:31.042979   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:31.043039   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.047311   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:31.047365   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:31.086140   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.086166   59208 cri.go:89] found id: ""
	I0719 15:52:31.086175   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:31.086230   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.091742   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:31.091818   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:31.134209   59208 cri.go:89] found id: ""
	I0719 15:52:31.134241   59208 logs.go:276] 0 containers: []
	W0719 15:52:31.134252   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:31.134260   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:31.134316   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:31.173297   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.173325   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.173331   59208 cri.go:89] found id: ""
	I0719 15:52:31.173353   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:31.173414   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.177951   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:31.182099   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:31.182121   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:31.196541   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:31.196565   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:31.322528   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:31.322555   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:31.369628   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:31.369658   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:31.417834   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:31.417867   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:31.459116   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:31.459145   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:31.500986   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:31.501018   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:31.578557   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:31.578606   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:31.635053   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:31.635082   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:31.692604   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:31.692635   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:31.729765   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:31.729801   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:31.766152   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:31.766177   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:32.301240   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:32.301278   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:30.013083   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:32.013142   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:34.323358   58417 addons.go:510] duration metric: took 1.19112329s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0719 15:52:37.001693   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:37.001896   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:52:34.849019   59208 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:34.866751   59208 api_server.go:72] duration metric: took 4m20.402312557s to wait for apiserver process to appear ...
	I0719 15:52:34.866779   59208 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:34.866816   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:34.866876   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:34.905505   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.905532   59208 cri.go:89] found id: ""
	I0719 15:52:34.905542   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:34.905609   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.910996   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:34.911069   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:34.958076   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:34.958100   59208 cri.go:89] found id: ""
	I0719 15:52:34.958110   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:34.958166   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:34.962439   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:34.962507   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:34.999095   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:34.999117   59208 cri.go:89] found id: ""
	I0719 15:52:34.999126   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:34.999178   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.003785   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:35.003848   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:35.042585   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.042613   59208 cri.go:89] found id: ""
	I0719 15:52:35.042622   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:35.042683   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.048705   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:35.048770   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:35.092408   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.092435   59208 cri.go:89] found id: ""
	I0719 15:52:35.092444   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:35.092499   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.096983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:35.097050   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:35.135694   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.135717   59208 cri.go:89] found id: ""
	I0719 15:52:35.135726   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:35.135782   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.140145   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:35.140223   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:35.178912   59208 cri.go:89] found id: ""
	I0719 15:52:35.178938   59208 logs.go:276] 0 containers: []
	W0719 15:52:35.178948   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:35.178955   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:35.179015   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:35.229067   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.229090   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.229104   59208 cri.go:89] found id: ""
	I0719 15:52:35.229112   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:35.229172   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.234985   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:35.240098   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:35.240120   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:35.299418   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:35.299449   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:35.316294   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:35.316330   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:35.433573   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:35.433610   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:35.479149   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:35.479181   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:35.526270   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:35.526299   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:35.564209   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:35.564241   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:35.601985   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:35.602020   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:35.669986   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:35.670015   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:35.711544   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:35.711580   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:35.763800   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:35.763831   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:35.822699   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:35.822732   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:35.863377   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:35.863422   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:38.777749   59208 api_server.go:253] Checking apiserver healthz at https://192.168.61.144:8444/healthz ...
	I0719 15:52:38.781984   59208 api_server.go:279] https://192.168.61.144:8444/healthz returned 200:
	ok
	I0719 15:52:38.782935   59208 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:38.782955   59208 api_server.go:131] duration metric: took 3.916169938s to wait for apiserver health ...
	I0719 15:52:38.782963   59208 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:38.782983   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:38.783026   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:38.818364   59208 cri.go:89] found id: "65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:38.818387   59208 cri.go:89] found id: ""
	I0719 15:52:38.818395   59208 logs.go:276] 1 containers: [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236]
	I0719 15:52:38.818442   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.823001   59208 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:38.823054   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:38.857871   59208 cri.go:89] found id: "60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:38.857900   59208 cri.go:89] found id: ""
	I0719 15:52:38.857909   59208 logs.go:276] 1 containers: [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b]
	I0719 15:52:38.857958   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.864314   59208 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:38.864375   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:38.910404   59208 cri.go:89] found id: "001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:38.910434   59208 cri.go:89] found id: ""
	I0719 15:52:38.910445   59208 logs.go:276] 1 containers: [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54]
	I0719 15:52:38.910505   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.915588   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:38.915645   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:38.952981   59208 cri.go:89] found id: "1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:38.953002   59208 cri.go:89] found id: ""
	I0719 15:52:38.953009   59208 logs.go:276] 1 containers: [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a]
	I0719 15:52:38.953055   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:38.957397   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:38.957447   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:39.002973   59208 cri.go:89] found id: "6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.003001   59208 cri.go:89] found id: ""
	I0719 15:52:39.003011   59208 logs.go:276] 1 containers: [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912]
	I0719 15:52:39.003059   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.007496   59208 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:39.007568   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:39.045257   59208 cri.go:89] found id: "c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.045282   59208 cri.go:89] found id: ""
	I0719 15:52:39.045291   59208 logs.go:276] 1 containers: [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b]
	I0719 15:52:39.045351   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.049358   59208 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:39.049415   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:39.083263   59208 cri.go:89] found id: ""
	I0719 15:52:39.083303   59208 logs.go:276] 0 containers: []
	W0719 15:52:39.083314   59208 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:39.083321   59208 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:39.083391   59208 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:39.121305   59208 cri.go:89] found id: "85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.121348   59208 cri.go:89] found id: "5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.121354   59208 cri.go:89] found id: ""
	I0719 15:52:39.121363   59208 logs.go:276] 2 containers: [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b]
	I0719 15:52:39.121421   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.126259   59208 ssh_runner.go:195] Run: which crictl
	I0719 15:52:39.130395   59208 logs.go:123] Gathering logs for kube-scheduler [1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a] ...
	I0719 15:52:39.130413   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f566fdead149c1b8b5f09588f41d04a6a944d3dc5a086ee08358100d0023f9a"
	I0719 15:52:39.171213   59208 logs.go:123] Gathering logs for storage-provisioner [5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b] ...
	I0719 15:52:39.171239   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a58e1c6658a8a3083345fbf718281c0c2654f6a90c1eebc9e0c56ad3dcd0b2b"
	I0719 15:52:39.206545   59208 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:39.206577   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:39.267068   59208 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:39.267105   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:39.373510   59208 logs.go:123] Gathering logs for kube-apiserver [65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236] ...
	I0719 15:52:39.373544   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65610b0e92d1413b8a9f68ee8dfdeb3bc33fa4cc1d9c84368f1cc39965f0d236"
	I0719 15:52:34.512374   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.012559   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:39.013766   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:35.495479   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:37.989424   58417 pod_ready.go:102] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:38.489746   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.489775   58417 pod_ready.go:81] duration metric: took 5.007393051s for pod "coredns-5cfdc65f69-4xxpm" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.489790   58417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495855   58417 pod_ready.go:92] pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:38.495884   58417 pod_ready.go:81] duration metric: took 6.085398ms for pod "coredns-5cfdc65f69-zk22p" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:38.495895   58417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:40.502651   58417 pod_ready.go:102] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:41.503286   58417 pod_ready.go:92] pod "etcd-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.503309   58417 pod_ready.go:81] duration metric: took 3.007406201s for pod "etcd-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.503321   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513225   58417 pod_ready.go:92] pod "kube-apiserver-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.513245   58417 pod_ready.go:81] duration metric: took 9.916405ms for pod "kube-apiserver-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.513256   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517651   58417 pod_ready.go:92] pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.517668   58417 pod_ready.go:81] duration metric: took 4.40518ms for pod "kube-controller-manager-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.517677   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522529   58417 pod_ready.go:92] pod "kube-proxy-qd84x" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.522544   58417 pod_ready.go:81] duration metric: took 4.861257ms for pod "kube-proxy-qd84x" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.522551   58417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687964   58417 pod_ready.go:92] pod "kube-scheduler-no-preload-382231" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:41.687987   58417 pod_ready.go:81] duration metric: took 165.428951ms for pod "kube-scheduler-no-preload-382231" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:41.687997   58417 pod_ready.go:38] duration metric: took 8.257437931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:41.688016   58417 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:41.688069   58417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:41.705213   58417 api_server.go:72] duration metric: took 8.573000368s to wait for apiserver process to appear ...
	I0719 15:52:41.705236   58417 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:41.705256   58417 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0719 15:52:41.709425   58417 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0719 15:52:41.710427   58417 api_server.go:141] control plane version: v1.31.0-beta.0
	I0719 15:52:41.710447   58417 api_server.go:131] duration metric: took 5.203308ms to wait for apiserver health ...
	I0719 15:52:41.710455   58417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:41.890063   58417 system_pods.go:59] 9 kube-system pods found
	I0719 15:52:41.890091   58417 system_pods.go:61] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:41.890095   58417 system_pods.go:61] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:41.890099   58417 system_pods.go:61] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:41.890103   58417 system_pods.go:61] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:41.890106   58417 system_pods.go:61] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:41.890109   58417 system_pods.go:61] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:41.890112   58417 system_pods.go:61] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:41.890117   58417 system_pods.go:61] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:41.890121   58417 system_pods.go:61] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:41.890128   58417 system_pods.go:74] duration metric: took 179.666477ms to wait for pod list to return data ...
	I0719 15:52:41.890135   58417 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.086946   58417 default_sa.go:45] found service account: "default"
	I0719 15:52:42.086973   58417 default_sa.go:55] duration metric: took 196.832888ms for default service account to be created ...
	I0719 15:52:42.086984   58417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.289457   58417 system_pods.go:86] 9 kube-system pods found
	I0719 15:52:42.289483   58417 system_pods.go:89] "coredns-5cfdc65f69-4xxpm" [8ff50d32-70e5-4821-b161-9c0bf4de6a2a] Running
	I0719 15:52:42.289489   58417 system_pods.go:89] "coredns-5cfdc65f69-zk22p" [03dcb169-2796-4dbd-8ccf-383e07d90b44] Running
	I0719 15:52:42.289493   58417 system_pods.go:89] "etcd-no-preload-382231" [767ea6db-fab3-417b-8329-f83b2e180e3f] Running
	I0719 15:52:42.289498   58417 system_pods.go:89] "kube-apiserver-no-preload-382231" [7a1364f2-ccfd-4def-8ff0-ce3c2aee7fa6] Running
	I0719 15:52:42.289502   58417 system_pods.go:89] "kube-controller-manager-no-preload-382231" [4919e46d-4294-4d5f-a4ad-8a9fa20d57ef] Running
	I0719 15:52:42.289506   58417 system_pods.go:89] "kube-proxy-qd84x" [73ebfa49-3a5a-44c0-948a-233d7a147bdd] Running
	I0719 15:52:42.289510   58417 system_pods.go:89] "kube-scheduler-no-preload-382231" [0b03a96f-409c-4816-88e5-bb4030ac87d1] Running
	I0719 15:52:42.289518   58417 system_pods.go:89] "metrics-server-78fcd8795b-rc6ft" [5348ffd6-5e80-4533-bc25-3dcd08c43ff4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.289523   58417 system_pods.go:89] "storage-provisioner" [91ccf728-07fe-4b05-823e-513e1a3c3505] Running
	I0719 15:52:42.289530   58417 system_pods.go:126] duration metric: took 202.54151ms to wait for k8s-apps to be running ...
	I0719 15:52:42.289536   58417 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.289575   58417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.304866   58417 system_svc.go:56] duration metric: took 15.319153ms WaitForService to wait for kubelet
	I0719 15:52:42.304931   58417 kubeadm.go:582] duration metric: took 9.172718104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.304958   58417 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.488087   58417 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.488108   58417 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.488122   58417 node_conditions.go:105] duration metric: took 183.159221ms to run NodePressure ...
	I0719 15:52:42.488135   58417 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.488144   58417 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.488157   58417 start.go:255] writing updated cluster config ...
	I0719 15:52:42.488453   58417 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.536465   58417 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0719 15:52:42.538606   58417 out.go:177] * Done! kubectl is now configured to use "no-preload-382231" cluster and "default" namespace by default
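	The pod_ready.go entries above poll each kube-system pod until its Ready condition reports True. A minimal client-go sketch of an equivalent poll follows; it is illustrative only (not minikube's pod_ready.go implementation), and the kubeconfig path and pod name are copied from the log purely as examples.

// podready_sketch.go -- illustrative only; not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and pod name are examples taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-382231", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // poll, as the pod_ready.go entries above do
	}
}

	Run against the no-preload-382231 cluster, such a loop prints "pod is Ready" once the etcd pod reports Ready, mirroring the pod_ready.go:92 lines above.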
	I0719 15:52:39.422000   59208 logs.go:123] Gathering logs for etcd [60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b] ...
	I0719 15:52:39.422034   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60e7b95877d59397bdfb26be1805b291d0f70a1774d5dc3cd595788f3e0eb64b"
	I0719 15:52:39.473826   59208 logs.go:123] Gathering logs for coredns [001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54] ...
	I0719 15:52:39.473860   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001c96d3b96693d3dbd52343d49a3654a8f300e3c7972f59702684a99d721f54"
	I0719 15:52:39.515998   59208 logs.go:123] Gathering logs for container status ...
	I0719 15:52:39.516023   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:39.559475   59208 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:39.559506   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:39.574174   59208 logs.go:123] Gathering logs for kube-proxy [6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912] ...
	I0719 15:52:39.574205   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d295bc6e6fb82af6b0e02b859260190ce84d22ae7c110d8711c8b1d251ab912"
	I0719 15:52:39.615906   59208 logs.go:123] Gathering logs for kube-controller-manager [c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b] ...
	I0719 15:52:39.615933   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c693018988910e96f55491dc3a8c1ca0e4c73d2d0051831b2715a61dcb4e257b"
	I0719 15:52:39.676764   59208 logs.go:123] Gathering logs for storage-provisioner [85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c] ...
	I0719 15:52:39.676795   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85352e7e71d12384314aac07c14c411d197de25acf9bfe1e24cc2a0f1de7518c"
	I0719 15:52:39.714437   59208 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:39.714467   59208 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:42.584088   59208 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:42.584114   59208 system_pods.go:61] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.584119   59208 system_pods.go:61] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.584123   59208 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.584127   59208 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.584130   59208 system_pods.go:61] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.584133   59208 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.584138   59208 system_pods.go:61] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.584143   59208 system_pods.go:61] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.584150   59208 system_pods.go:74] duration metric: took 3.801182741s to wait for pod list to return data ...
	I0719 15:52:42.584156   59208 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:42.586910   59208 default_sa.go:45] found service account: "default"
	I0719 15:52:42.586934   59208 default_sa.go:55] duration metric: took 2.771722ms for default service account to be created ...
	I0719 15:52:42.586943   59208 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:42.593611   59208 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:42.593634   59208 system_pods.go:89] "coredns-7db6d8ff4d-z7865" [c756208f-51b9-4a5a-932e-d7d38408a532] Running
	I0719 15:52:42.593639   59208 system_pods.go:89] "etcd-default-k8s-diff-port-601445" [6f4482cc-d34b-42f0-be36-fdc0854a99da] Running
	I0719 15:52:42.593645   59208 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-601445" [837558be-bc58-4260-9812-358cdf349123] Running
	I0719 15:52:42.593650   59208 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-601445" [ebe3a64d-83ea-484c-8e1a-5a310bd8cf12] Running
	I0719 15:52:42.593654   59208 system_pods.go:89] "kube-proxy-r7b2z" [24eff210-56a6-4b1b-bc19-7c492c5ce997] Running
	I0719 15:52:42.593658   59208 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-601445" [8a1f864c-f201-45cf-afb5-ac3ea10b6a7f] Running
	I0719 15:52:42.593669   59208 system_pods.go:89] "metrics-server-569cc877fc-h7hgv" [9b4cdf2e-e6fc-4d88-99f1-31066805f915] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:42.593673   59208 system_pods.go:89] "storage-provisioner" [4dd721a2-a6f5-4aad-b86d-692d351a6fcf] Running
	I0719 15:52:42.593680   59208 system_pods.go:126] duration metric: took 6.731347ms to wait for k8s-apps to be running ...
	I0719 15:52:42.593687   59208 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:42.593726   59208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:42.615811   59208 system_svc.go:56] duration metric: took 22.114487ms WaitForService to wait for kubelet
	I0719 15:52:42.615841   59208 kubeadm.go:582] duration metric: took 4m28.151407807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:42.615864   59208 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:42.619021   59208 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:42.619040   59208 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:42.619050   59208 node_conditions.go:105] duration metric: took 3.180958ms to run NodePressure ...
	I0719 15:52:42.619060   59208 start.go:241] waiting for startup goroutines ...
	I0719 15:52:42.619067   59208 start.go:246] waiting for cluster config update ...
	I0719 15:52:42.619079   59208 start.go:255] writing updated cluster config ...
	I0719 15:52:42.619329   59208 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:42.677117   59208 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:42.679317   59208 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-601445" cluster and "default" namespace by default
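	Most of the log gathering above follows one pattern per component: resolve the container ID with crictl ps, then tail its last 400 log lines with crictl logs. A rough Go sketch of that pattern, run directly on the node rather than through minikube's ssh_runner, might look like this (it assumes passwordless sudo on the VM):

// crictl_tail_sketch.go -- rough sketch of the "crictl ps --quiet --name=X,
// then crictl logs --tail 400 <id>" pattern seen throughout the run above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "kube-apiserver" // any of the components gathered above

	// Mirrors: sudo crictl ps -a --quiet --name=kube-apiserver
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", name)
		return
	}

	// Mirrors: sudo crictl logs --tail 400 <container id>
	logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
	if err != nil {
		fmt.Println("crictl logs failed:", err)
	}
	fmt.Print(string(logs))
}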
	I0719 15:52:41.514013   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:44.012173   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:47.002231   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:52:47.002432   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
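	The [kubelet-check] failure above repeats until the kubelet starts serving its local healthz endpoint. A small sketch of the same probe kubeadm performs (the curl quoted in the message) is shown below; a "connection refused" result simply means nothing is listening on 127.0.0.1:10248 yet.

// kubelet_healthz_sketch.go -- same probe as the curl quoted in the
// [kubelet-check] message above; illustrative only.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here just means the kubelet is not listening yet.
		fmt.Println("kubelet healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, string(body))
}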
	I0719 15:52:46.013717   58376 pod_ready.go:102] pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:48.013121   58376 pod_ready.go:81] duration metric: took 4m0.006772624s for pod "metrics-server-569cc877fc-2tsch" in "kube-system" namespace to be "Ready" ...
	E0719 15:52:48.013143   58376 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0719 15:52:48.013150   58376 pod_ready.go:38] duration metric: took 4m4.417474484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:48.013165   58376 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:48.013194   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:48.013234   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:48.067138   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.067166   58376 cri.go:89] found id: ""
	I0719 15:52:48.067175   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:48.067218   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.071486   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:48.071531   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:48.115491   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.115514   58376 cri.go:89] found id: ""
	I0719 15:52:48.115525   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:48.115583   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.119693   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:48.119750   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:48.161158   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.161185   58376 cri.go:89] found id: ""
	I0719 15:52:48.161194   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:48.161257   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.165533   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:48.165584   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:48.207507   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.207528   58376 cri.go:89] found id: ""
	I0719 15:52:48.207537   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:48.207596   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.212070   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:48.212145   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:48.250413   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.250441   58376 cri.go:89] found id: ""
	I0719 15:52:48.250451   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:48.250510   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.255025   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:48.255095   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:48.289898   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.289922   58376 cri.go:89] found id: ""
	I0719 15:52:48.289930   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:48.289976   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.294440   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:48.294489   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:48.329287   58376 cri.go:89] found id: ""
	I0719 15:52:48.329314   58376 logs.go:276] 0 containers: []
	W0719 15:52:48.329326   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:48.329332   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:48.329394   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:48.373215   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.373242   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.373248   58376 cri.go:89] found id: ""
	I0719 15:52:48.373257   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:48.373311   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.377591   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:48.381610   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:48.381635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:48.440106   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:48.440148   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:48.455200   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:48.455234   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:48.496729   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:48.496757   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:48.535475   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:48.535501   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:48.592954   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:48.592993   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:48.635925   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:48.635957   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:48.671611   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:48.671642   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:48.809648   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:48.809681   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:48.863327   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:48.863361   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:48.902200   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:48.902245   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:48.937497   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:48.937525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:49.446900   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:49.446933   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:51.988535   58376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:52.005140   58376 api_server.go:72] duration metric: took 4m16.116469116s to wait for apiserver process to appear ...
	I0719 15:52:52.005165   58376 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:52.005206   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:52.005258   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:52.041113   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.041143   58376 cri.go:89] found id: ""
	I0719 15:52:52.041150   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:52.041199   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.045292   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:52.045349   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:52.086747   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.086770   58376 cri.go:89] found id: ""
	I0719 15:52:52.086778   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:52.086821   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.091957   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:52.092015   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:52.128096   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.128128   58376 cri.go:89] found id: ""
	I0719 15:52:52.128138   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:52.128204   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.132889   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:52.132949   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:52.168359   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.168389   58376 cri.go:89] found id: ""
	I0719 15:52:52.168398   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:52.168454   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.172577   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:52.172639   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:52.211667   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.211684   58376 cri.go:89] found id: ""
	I0719 15:52:52.211691   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:52.211740   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.215827   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:52.215893   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:52.252105   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.252130   58376 cri.go:89] found id: ""
	I0719 15:52:52.252140   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:52.252194   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.256407   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:52.256464   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:52.292646   58376 cri.go:89] found id: ""
	I0719 15:52:52.292675   58376 logs.go:276] 0 containers: []
	W0719 15:52:52.292685   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:52.292693   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:52.292755   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:52.326845   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.326875   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.326880   58376 cri.go:89] found id: ""
	I0719 15:52:52.326889   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:52.326946   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.331338   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:52.335530   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:52.335554   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:52.371981   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:52.372010   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:52.406921   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:52.406946   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:52.442975   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:52.443007   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 15:52:52.497838   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:52.497873   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:52.556739   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:52.556776   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:52.665610   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:52.665643   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:52.711547   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:52.711580   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:52.759589   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:52.759634   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:52.807300   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:52.807374   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:52.857159   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:52.857186   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:52.917896   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:52.917931   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:53.342603   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:53.342646   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:55.857727   58376 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I0719 15:52:55.861835   58376 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I0719 15:52:55.862804   58376 api_server.go:141] control plane version: v1.30.3
	I0719 15:52:55.862822   58376 api_server.go:131] duration metric: took 3.857650801s to wait for apiserver health ...
	I0719 15:52:55.862829   58376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:55.862852   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:52:55.862905   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:52:55.900840   58376 cri.go:89] found id: "e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:55.900859   58376 cri.go:89] found id: ""
	I0719 15:52:55.900866   58376 logs.go:276] 1 containers: [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676]
	I0719 15:52:55.900909   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.906205   58376 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:52:55.906291   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:52:55.950855   58376 cri.go:89] found id: "b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:55.950879   58376 cri.go:89] found id: ""
	I0719 15:52:55.950887   58376 logs.go:276] 1 containers: [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2]
	I0719 15:52:55.950939   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.955407   58376 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:52:55.955472   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:52:55.994954   58376 cri.go:89] found id: "79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:55.994981   58376 cri.go:89] found id: ""
	I0719 15:52:55.994992   58376 logs.go:276] 1 containers: [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004]
	I0719 15:52:55.995052   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:55.999179   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:52:55.999241   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:52:56.036497   58376 cri.go:89] found id: "f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.036521   58376 cri.go:89] found id: ""
	I0719 15:52:56.036530   58376 logs.go:276] 1 containers: [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10]
	I0719 15:52:56.036585   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.041834   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:52:56.041900   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:52:56.082911   58376 cri.go:89] found id: "760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.082934   58376 cri.go:89] found id: ""
	I0719 15:52:56.082943   58376 logs.go:276] 1 containers: [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32]
	I0719 15:52:56.082998   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.087505   58376 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:52:56.087571   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:52:56.124517   58376 cri.go:89] found id: "4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.124544   58376 cri.go:89] found id: ""
	I0719 15:52:56.124554   58376 logs.go:276] 1 containers: [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56]
	I0719 15:52:56.124616   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.129221   58376 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:52:56.129297   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:52:56.170151   58376 cri.go:89] found id: ""
	I0719 15:52:56.170177   58376 logs.go:276] 0 containers: []
	W0719 15:52:56.170193   58376 logs.go:278] No container was found matching "kindnet"
	I0719 15:52:56.170212   58376 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0719 15:52:56.170292   58376 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0719 15:52:56.218351   58376 cri.go:89] found id: "33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:56.218377   58376 cri.go:89] found id: "4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.218381   58376 cri.go:89] found id: ""
	I0719 15:52:56.218388   58376 logs.go:276] 2 containers: [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff]
	I0719 15:52:56.218437   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.223426   58376 ssh_runner.go:195] Run: which crictl
	I0719 15:52:56.227742   58376 logs.go:123] Gathering logs for storage-provisioner [4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff] ...
	I0719 15:52:56.227759   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab77ba1bf35a4e6f6d565e8b530fb77db1c6e4faba02990302a995666bb9aff"
	I0719 15:52:56.271701   58376 logs.go:123] Gathering logs for kubelet ...
	I0719 15:52:56.271733   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:52:56.325333   58376 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:52:56.325366   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 15:52:56.431391   58376 logs.go:123] Gathering logs for kube-apiserver [e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676] ...
	I0719 15:52:56.431423   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e92e20675555d3362397d7b230fe098fdf4ae0414c6829f46c14efbaa9d72676"
	I0719 15:52:56.485442   58376 logs.go:123] Gathering logs for etcd [b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2] ...
	I0719 15:52:56.485472   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5cdfd8260b76f45de954e95a3d5bd125df4b503d93150b411b174ac26829cc2"
	I0719 15:52:56.527493   58376 logs.go:123] Gathering logs for kube-scheduler [f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10] ...
	I0719 15:52:56.527525   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f82d9ede0d89b955bcfb2488d7e496196b71e9102991cee11f4221b0311d0e10"
	I0719 15:52:56.563260   58376 logs.go:123] Gathering logs for kube-proxy [760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32] ...
	I0719 15:52:56.563289   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 760d42fba7d1a5fe7f5c0435804843b9c82bc3f9d2ba1310e90c2ca8fd21af32"
	I0719 15:52:56.600604   58376 logs.go:123] Gathering logs for kube-controller-manager [4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56] ...
	I0719 15:52:56.600635   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c26eb67ddb9aa7d189be8b3bdeddb7553fab69fd701f85296eb698035ef4d56"
	I0719 15:52:56.656262   58376 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:52:56.656305   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:52:57.031511   58376 logs.go:123] Gathering logs for dmesg ...
	I0719 15:52:57.031549   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:52:57.046723   58376 logs.go:123] Gathering logs for coredns [79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004] ...
	I0719 15:52:57.046748   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79faf7b7b4478007e5560cd6a84dcea1c8f54751ff439aa60d9d712e24b18004"
	I0719 15:52:57.083358   58376 logs.go:123] Gathering logs for storage-provisioner [33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3] ...
	I0719 15:52:57.083390   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ca90d25224cdb09b0d1ab6379643b324eb9aa270aa5c00ddd472a31a8b1fb3"
	I0719 15:52:57.124108   58376 logs.go:123] Gathering logs for container status ...
	I0719 15:52:57.124136   58376 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
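The "container status" step above uses a small fallback idiom: prefer crictl when it is on PATH, otherwise fall back to docker. A minimal standalone sketch of the same check run from the host (bash; the `minikube ssh` wrapper and profile name are assumptions, the inner command is taken from the log line above):

    minikube ssh -p embed-certs-817144 -- 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'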
	I0719 15:52:59.670804   58376 system_pods.go:59] 8 kube-system pods found
	I0719 15:52:59.670831   58376 system_pods.go:61] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.670836   58376 system_pods.go:61] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.670840   58376 system_pods.go:61] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.670844   58376 system_pods.go:61] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.670847   58376 system_pods.go:61] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.670850   58376 system_pods.go:61] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.670855   58376 system_pods.go:61] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.670859   58376 system_pods.go:61] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.670865   58376 system_pods.go:74] duration metric: took 3.808031391s to wait for pod list to return data ...
	I0719 15:52:59.670871   58376 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:59.673231   58376 default_sa.go:45] found service account: "default"
	I0719 15:52:59.673249   58376 default_sa.go:55] duration metric: took 2.372657ms for default service account to be created ...
	I0719 15:52:59.673255   58376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:59.678267   58376 system_pods.go:86] 8 kube-system pods found
	I0719 15:52:59.678289   58376 system_pods.go:89] "coredns-7db6d8ff4d-n945p" [73e2090d-a652-4716-b47e-be8f3b3679fa] Running
	I0719 15:52:59.678296   58376 system_pods.go:89] "etcd-embed-certs-817144" [ff1a0f5d-dc49-4c01-acd4-14181696ed15] Running
	I0719 15:52:59.678303   58376 system_pods.go:89] "kube-apiserver-embed-certs-817144" [b158c39a-babc-44d8-a33a-0bbe4614536e] Running
	I0719 15:52:59.678310   58376 system_pods.go:89] "kube-controller-manager-embed-certs-817144" [439dcf47-d3e6-462f-8687-09cc0be5b8c3] Running
	I0719 15:52:59.678315   58376 system_pods.go:89] "kube-proxy-4d4g9" [93ffa175-3bfe-4477-be1a-82238d78b186] Running
	I0719 15:52:59.678322   58376 system_pods.go:89] "kube-scheduler-embed-certs-817144" [c8c53762-4b36-49a4-8e13-935c22ced83f] Running
	I0719 15:52:59.678331   58376 system_pods.go:89] "metrics-server-569cc877fc-2tsch" [809cb05e-d781-476e-a84b-dd009d044ac5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 15:52:59.678341   58376 system_pods.go:89] "storage-provisioner" [dd14f391-0850-487a-b394-4e243265e2ae] Running
	I0719 15:52:59.678352   58376 system_pods.go:126] duration metric: took 5.090968ms to wait for k8s-apps to be running ...
	I0719 15:52:59.678362   58376 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:59.678411   58376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:59.695116   58376 system_svc.go:56] duration metric: took 16.750228ms WaitForService to wait for kubelet
	I0719 15:52:59.695139   58376 kubeadm.go:582] duration metric: took 4m23.806469478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:52:59.695163   58376 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:59.697573   58376 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 15:52:59.697592   58376 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:59.697602   58376 node_conditions.go:105] duration metric: took 2.433643ms to run NodePressure ...
	I0719 15:52:59.697612   58376 start.go:241] waiting for startup goroutines ...
	I0719 15:52:59.697618   58376 start.go:246] waiting for cluster config update ...
	I0719 15:52:59.697629   58376 start.go:255] writing updated cluster config ...
	I0719 15:52:59.697907   58376 ssh_runner.go:195] Run: rm -f paused
	I0719 15:52:59.744965   58376 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 15:52:59.746888   58376 out.go:177] * Done! kubectl is now configured to use "embed-certs-817144" cluster and "default" namespace by default
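Before printing "Done!", the driver repeatedly probes the apiserver healthz endpoint (the api_server.go lines above) until it answers 200. A minimal sketch of the same probe run by hand, assuming the endpoint shown in the log and minikube's usual client-certificate layout under ~/.minikube (both are assumptions about this particular run, not part of the original output):

    curl --cacert ~/.minikube/ca.crt \
         --cert ~/.minikube/profiles/embed-certs-817144/client.crt \
         --key  ~/.minikube/profiles/embed-certs-817144/client.key \
         https://192.168.72.37:8443/healthz
    # a healthy control plane answers with: ok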
	I0719 15:53:07.003006   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:07.003249   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004552   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:53:47.004805   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:53:47.004816   58817 kubeadm.go:310] 
	I0719 15:53:47.004902   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:53:47.004996   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:53:47.005020   58817 kubeadm.go:310] 
	I0719 15:53:47.005068   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:53:47.005117   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:53:47.005246   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:53:47.005262   58817 kubeadm.go:310] 
	I0719 15:53:47.005397   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:53:47.005458   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:53:47.005508   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:53:47.005522   58817 kubeadm.go:310] 
	I0719 15:53:47.005643   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:53:47.005714   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:53:47.005720   58817 kubeadm.go:310] 
	I0719 15:53:47.005828   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:53:47.005924   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:53:47.005987   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:53:47.006080   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:53:47.006092   58817 kubeadm.go:310] 
	I0719 15:53:47.006824   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:53:47.006941   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:53:47.007028   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0719 15:53:47.007180   58817 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
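The failure text above already lists the troubleshooting steps; spelled out as commands to run inside the node (for example via `minikube ssh -p old-k8s-version-862924`, the profile name inferred from the CRI-O log further down; this is a sketch, not part of the original output):

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for any failing container id found above:
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID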
	
	I0719 15:53:47.007244   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 15:53:47.468272   58817 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:53:47.483560   58817 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:53:47.494671   58817 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:53:47.494691   58817 kubeadm.go:157] found existing configuration files:
	
	I0719 15:53:47.494742   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 15:53:47.503568   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 15:53:47.503630   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 15:53:47.512606   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 15:53:47.521247   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 15:53:47.521303   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 15:53:47.530361   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.539748   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 15:53:47.539799   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 15:53:47.549243   58817 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 15:53:47.559306   58817 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 15:53:47.559369   58817 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
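The block above is minikube's stale-kubeconfig cleanup: each of the four files under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init is retried. A compact sketch of the same check (bash, same paths and endpoint as in the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done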
	I0719 15:53:47.570095   58817 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:53:47.648871   58817 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 15:53:47.649078   58817 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 15:53:47.792982   58817 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:53:47.793141   58817 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:53:47.793254   58817 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:53:47.992636   58817 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:53:47.994547   58817 out.go:204]   - Generating certificates and keys ...
	I0719 15:53:47.994648   58817 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 15:53:47.994734   58817 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 15:53:47.994866   58817 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 15:53:47.994963   58817 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 15:53:47.995077   58817 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 15:53:47.995148   58817 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 15:53:47.995250   58817 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 15:53:47.995336   58817 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 15:53:47.995447   58817 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 15:53:47.995549   58817 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 15:53:47.995603   58817 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 15:53:47.995685   58817 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:53:48.092671   58817 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:53:48.256432   58817 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:53:48.334799   58817 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:53:48.483435   58817 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:53:48.504681   58817 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:53:48.505503   58817 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:53:48.505553   58817 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 15:53:48.654795   58817 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:53:48.656738   58817 out.go:204]   - Booting up control plane ...
	I0719 15:53:48.656849   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:53:48.664278   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:53:48.665556   58817 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:53:48.666292   58817 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:53:48.668355   58817 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:54:28.670119   58817 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 15:54:28.670451   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:28.670679   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:33.671159   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:33.671408   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:54:43.671899   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:54:43.672129   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:03.673219   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:03.673444   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674003   58817 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 15:55:43.674282   58817 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 15:55:43.674311   58817 kubeadm.go:310] 
	I0719 15:55:43.674362   58817 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 15:55:43.674430   58817 kubeadm.go:310] 		timed out waiting for the condition
	I0719 15:55:43.674439   58817 kubeadm.go:310] 
	I0719 15:55:43.674479   58817 kubeadm.go:310] 	This error is likely caused by:
	I0719 15:55:43.674551   58817 kubeadm.go:310] 		- The kubelet is not running
	I0719 15:55:43.674694   58817 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 15:55:43.674711   58817 kubeadm.go:310] 
	I0719 15:55:43.674872   58817 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 15:55:43.674923   58817 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 15:55:43.674973   58817 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 15:55:43.674987   58817 kubeadm.go:310] 
	I0719 15:55:43.675076   58817 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 15:55:43.675185   58817 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 15:55:43.675204   58817 kubeadm.go:310] 
	I0719 15:55:43.675343   58817 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 15:55:43.675486   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 15:55:43.675593   58817 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 15:55:43.675698   58817 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 15:55:43.675712   58817 kubeadm.go:310] 
	I0719 15:55:43.676679   58817 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:55:43.676793   58817 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 15:55:43.676881   58817 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 15:55:43.676950   58817 kubeadm.go:394] duration metric: took 7m56.357000435s to StartCluster
	I0719 15:55:43.677009   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 15:55:43.677063   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 15:55:43.720714   58817 cri.go:89] found id: ""
	I0719 15:55:43.720746   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.720757   58817 logs.go:278] No container was found matching "kube-apiserver"
	I0719 15:55:43.720765   58817 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 15:55:43.720832   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 15:55:43.758961   58817 cri.go:89] found id: ""
	I0719 15:55:43.758987   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.758995   58817 logs.go:278] No container was found matching "etcd"
	I0719 15:55:43.759001   58817 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 15:55:43.759048   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 15:55:43.798844   58817 cri.go:89] found id: ""
	I0719 15:55:43.798872   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.798882   58817 logs.go:278] No container was found matching "coredns"
	I0719 15:55:43.798889   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 15:55:43.798960   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 15:55:43.835395   58817 cri.go:89] found id: ""
	I0719 15:55:43.835418   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.835426   58817 logs.go:278] No container was found matching "kube-scheduler"
	I0719 15:55:43.835432   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 15:55:43.835499   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 15:55:43.871773   58817 cri.go:89] found id: ""
	I0719 15:55:43.871800   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.871810   58817 logs.go:278] No container was found matching "kube-proxy"
	I0719 15:55:43.871817   58817 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 15:55:43.871881   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 15:55:43.903531   58817 cri.go:89] found id: ""
	I0719 15:55:43.903552   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.903559   58817 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 15:55:43.903565   58817 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 15:55:43.903613   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 15:55:43.943261   58817 cri.go:89] found id: ""
	I0719 15:55:43.943288   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.943299   58817 logs.go:278] No container was found matching "kindnet"
	I0719 15:55:43.943306   58817 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0719 15:55:43.943364   58817 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0719 15:55:43.980788   58817 cri.go:89] found id: ""
	I0719 15:55:43.980815   58817 logs.go:276] 0 containers: []
	W0719 15:55:43.980826   58817 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0719 15:55:43.980837   58817 logs.go:123] Gathering logs for kubelet ...
	I0719 15:55:43.980853   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 15:55:44.033880   58817 logs.go:123] Gathering logs for dmesg ...
	I0719 15:55:44.033922   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 15:55:44.048683   58817 logs.go:123] Gathering logs for describe nodes ...
	I0719 15:55:44.048709   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 15:55:44.129001   58817 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 15:55:44.129028   58817 logs.go:123] Gathering logs for CRI-O ...
	I0719 15:55:44.129043   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 15:55:44.245246   58817 logs.go:123] Gathering logs for container status ...
	I0719 15:55:44.245282   58817 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0719 15:55:44.303587   58817 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 15:55:44.303632   58817 out.go:239] * 
	W0719 15:55:44.303689   58817 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.303716   58817 out.go:239] * 
	W0719 15:55:44.304733   58817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:55:44.308714   58817 out.go:177] 
	W0719 15:55:44.310103   58817 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 15:55:44.310163   58817 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 15:55:44.310190   58817 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 15:55:44.311707   58817 out.go:177] 
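The suggestion above can be tried directly by re-running the start with the kubelet cgroup-driver override; a sketch of the invocation, assuming the profile name, KVM driver, and CRI-O runtime used by this test job (the remaining flags of the original run are not shown in the log):

    minikube start -p old-k8s-version-862924 \
      --driver=kvm2 \
      --container-runtime=crio \
      --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd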
	
	
	==> CRI-O <==
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.608600178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405194608579577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9010df60-44e9-40c2-9426-196aae58c53f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.609157670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d21abf1e-ffd9-4eab-8b12-8622cb001c06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.609223389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d21abf1e-ffd9-4eab-8b12-8622cb001c06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.609254996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d21abf1e-ffd9-4eab-8b12-8622cb001c06 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.641313791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53714fa6-57b8-4a48-af28-27c49b5a0ac8 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.641398918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53714fa6-57b8-4a48-af28-27c49b5a0ac8 name=/runtime.v1.RuntimeService/Version
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.642678105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd436448-37d1-4a65-a319-2517f3bcaecb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.643056169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405194643032353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd436448-37d1-4a65-a319-2517f3bcaecb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.643710613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=defee063-a968-44d1-b87e-20dd99c3c18d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.643777371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=defee063-a968-44d1-b87e-20dd99c3c18d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.643831316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=defee063-a968-44d1-b87e-20dd99c3c18d name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.676581894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=763482ff-49b4-477d-b11f-e7e545b8c41e name=/runtime.v1.RuntimeService/Version
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.676651996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=763482ff-49b4-477d-b11f-e7e545b8c41e name=/runtime.v1.RuntimeService/Version
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.677697745Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=607bf786-dd81-41b9-a12d-779a7c1ec2d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.678167803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405194678145256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=607bf786-dd81-41b9-a12d-779a7c1ec2d7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.678781679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8790cde-ee41-4f6f-b18a-b68740572b1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.678828956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8790cde-ee41-4f6f-b18a-b68740572b1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.678864508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b8790cde-ee41-4f6f-b18a-b68740572b1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.709172954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d089bc1a-fafc-454b-a5b1-d18d56dabdfd name=/runtime.v1.RuntimeService/Version
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.709238599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d089bc1a-fafc-454b-a5b1-d18d56dabdfd name=/runtime.v1.RuntimeService/Version
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.710216981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fef6f92-74ad-43de-bd6d-d871cccf617a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.710620397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721405194710590731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fef6f92-74ad-43de-bd6d-d871cccf617a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.711227341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=590774b3-fa4b-4751-906e-65b63d0dccb1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.711274518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=590774b3-fa4b-4751-906e-65b63d0dccb1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 16:06:34 old-k8s-version-862924 crio[647]: time="2024-07-19 16:06:34.711309462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=590774b3-fa4b-4751-906e-65b63d0dccb1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul19 15:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051724] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039649] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.567082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.332449] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.594221] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.070476] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.062261] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077473] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.217641] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.149423] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.267895] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +6.718838] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.061033] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.715316] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[ +12.018302] kauditd_printk_skb: 46 callbacks suppressed
	[Jul19 15:51] systemd-fstab-generator[5022]: Ignoring "noauto" option for root device
	[Jul19 15:53] systemd-fstab-generator[5300]: Ignoring "noauto" option for root device
	[  +0.062109] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 16:06:34 up 19 min,  0 users,  load average: 0.08, 0.05, 0.03
	Linux old-k8s-version-862924 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a4f150, 0xc000bac980)
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: goroutine 165 [syscall]:
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: syscall.Syscall6(0xe8, 0xc, 0xc000d0fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000d0fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000a5b860, 0x0, 0x0, 0x0)
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0007d18b0)
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 19 16:06:33 old-k8s-version-862924 kubelet[6738]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 19 16:06:33 old-k8s-version-862924 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 19 16:06:33 old-k8s-version-862924 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 19 16:06:34 old-k8s-version-862924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 133.
	Jul 19 16:06:34 old-k8s-version-862924 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 19 16:06:34 old-k8s-version-862924 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 19 16:06:34 old-k8s-version-862924 kubelet[6785]: I0719 16:06:34.603497    6785 server.go:416] Version: v1.20.0
	Jul 19 16:06:34 old-k8s-version-862924 kubelet[6785]: I0719 16:06:34.603787    6785 server.go:837] Client rotation is on, will bootstrap in background
	Jul 19 16:06:34 old-k8s-version-862924 kubelet[6785]: I0719 16:06:34.605805    6785 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 19 16:06:34 old-k8s-version-862924 kubelet[6785]: I0719 16:06:34.607336    6785 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Jul 19 16:06:34 old-k8s-version-862924 kubelet[6785]: W0719 16:06:34.607373    6785 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 2 (221.764891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-862924" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (104.96s)
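The failure log above ends with minikube's own suggestion (W0719 15:55:44) to retry with --extra-config=kubelet.cgroup-driver=systemd; see also https://github.com/kubernetes/minikube/issues/4172. A minimal sketch of that retry for this profile, assuming the same kvm2 driver, crio runtime, and Kubernetes v1.20.0 used in this run (the exact flags the harness passes are not shown in this report), would be:

	# hypothetical retry, not executed by this test run
	minikube start -p old-k8s-version-862924 \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet keeps crash-looping afterwards, the crictl and journalctl commands quoted in the kubeadm output above are the next place to look.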

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 27.29
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 13.83
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.48
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 53.59
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 122.16
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 146.5
38 TestAddons/parallel/Registry 16.74
40 TestAddons/parallel/InspektorGadget 17.1
42 TestAddons/parallel/HelmTiller 13.52
44 TestAddons/parallel/CSI 85.15
45 TestAddons/parallel/Headlamp 13.03
46 TestAddons/parallel/CloudSpanner 5.59
47 TestAddons/parallel/LocalPath 56.1
48 TestAddons/parallel/NvidiaDevicePlugin 5.79
49 TestAddons/parallel/Yakd 6.01
53 TestAddons/serial/GCPAuth/Namespaces 0.12
55 TestCertOptions 108.58
56 TestCertExpiration 305.43
58 TestForceSystemdFlag 74.77
59 TestForceSystemdEnv 62.02
61 TestKVMDriverInstallOrUpdate 5.37
65 TestErrorSpam/setup 40.62
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.54
69 TestErrorSpam/unpause 1.58
70 TestErrorSpam/stop 4.79
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 96.53
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 50.52
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.08
82 TestFunctional/serial/CacheCmd/cache/add_local 2.15
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
87 TestFunctional/serial/CacheCmd/cache/delete 0.08
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 34.49
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.38
93 TestFunctional/serial/LogsFileCmd 1.41
94 TestFunctional/serial/InvalidService 4.88
96 TestFunctional/parallel/ConfigCmd 0.34
97 TestFunctional/parallel/DashboardCmd 45.3
98 TestFunctional/parallel/DryRun 0.26
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.98
104 TestFunctional/parallel/ServiceCmdConnect 8.54
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 31.79
108 TestFunctional/parallel/SSHCmd 0.36
109 TestFunctional/parallel/CpCmd 1.33
110 TestFunctional/parallel/MySQL 27.59
111 TestFunctional/parallel/FileSync 0.22
112 TestFunctional/parallel/CertSync 1.4
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
120 TestFunctional/parallel/License 0.62
121 TestFunctional/parallel/ServiceCmd/DeployApp 12.23
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
132 TestFunctional/parallel/MountCmd/any-port 33.74
133 TestFunctional/parallel/ProfileCmd/profile_list 0.33
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
135 TestFunctional/parallel/ServiceCmd/List 0.82
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
138 TestFunctional/parallel/ServiceCmd/Format 0.77
139 TestFunctional/parallel/ServiceCmd/URL 0.31
140 TestFunctional/parallel/Version/short 0.05
141 TestFunctional/parallel/Version/components 0.89
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.52
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.36
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.49
146 TestFunctional/parallel/ImageCommands/ImageBuild 5.33
147 TestFunctional/parallel/ImageCommands/Setup 2.01
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.25
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
158 TestFunctional/parallel/MountCmd/specific-port 1.53
159 TestFunctional/parallel/MountCmd/VerifyCleanup 0.64
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 270.96
167 TestMultiControlPlane/serial/DeployApp 7.69
168 TestMultiControlPlane/serial/PingHostFromPods 1.2
169 TestMultiControlPlane/serial/AddWorkerNode 59.15
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.65
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.05
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
181 TestMultiControlPlane/serial/RestartCluster 353.91
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
183 TestMultiControlPlane/serial/AddSecondaryNode 74.72
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
188 TestJSONOutput/start/Command 54.77
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.71
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.61
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.37
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 83.67
220 TestMountStart/serial/StartWithMountFirst 25.35
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 27.38
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.87
225 TestMountStart/serial/VerifyMountPostDelete 0.35
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 23.65
228 TestMountStart/serial/VerifyMountPostStop 0.35
231 TestMultiNode/serial/FreshStart2Nodes 116.72
232 TestMultiNode/serial/DeployApp2Nodes 6.41
233 TestMultiNode/serial/PingHostFrom2Pods 0.75
234 TestMultiNode/serial/AddNode 54.34
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.2
237 TestMultiNode/serial/CopyFile 6.87
238 TestMultiNode/serial/StopNode 2.21
239 TestMultiNode/serial/StartAfterStop 39.65
241 TestMultiNode/serial/DeleteNode 2.2
243 TestMultiNode/serial/RestartMultiNode 176.5
244 TestMultiNode/serial/ValidateNameConflict 43.45
251 TestScheduledStopUnix 113.69
255 TestRunningBinaryUpgrade 194.55
259 TestStoppedBinaryUpgrade/Setup 2.69
260 TestStoppedBinaryUpgrade/Upgrade 143.96
269 TestPause/serial/Start 106.29
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
273 TestNoKubernetes/serial/StartWithK8s 46.38
277 TestNoKubernetes/serial/StartWithStopK8s 7.13
282 TestNetworkPlugins/group/false 2.88
286 TestNoKubernetes/serial/Start 28.75
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
289 TestNoKubernetes/serial/ProfileList 1.07
290 TestNoKubernetes/serial/Stop 1.29
291 TestNoKubernetes/serial/StartNoArgs 46.89
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
296 TestStartStop/group/no-preload/serial/FirstStart 136.12
298 TestStartStop/group/embed-certs/serial/FirstStart 128.89
299 TestStartStop/group/embed-certs/serial/DeployApp 10.29
300 TestStartStop/group/no-preload/serial/DeployApp 9.31
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.45
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
314 TestStartStop/group/embed-certs/serial/SecondStart 635.57
315 TestStartStop/group/no-preload/serial/SecondStart 617.86
316 TestStartStop/group/old-k8s-version/serial/Stop 4.28
317 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 483.6
330 TestStartStop/group/newest-cni/serial/FirstStart 48.62
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
333 TestStartStop/group/newest-cni/serial/Stop 7.37
334 TestNetworkPlugins/group/auto/Start 94.64
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
336 TestStartStop/group/newest-cni/serial/SecondStart 78.68
337 TestNetworkPlugins/group/kindnet/Start 120.25
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
341 TestStartStop/group/newest-cni/serial/Pause 4.87
342 TestNetworkPlugins/group/flannel/Start 85.49
343 TestNetworkPlugins/group/auto/KubeletFlags 0.23
344 TestNetworkPlugins/group/auto/NetCatPod 10.29
345 TestNetworkPlugins/group/auto/DNS 0.18
346 TestNetworkPlugins/group/auto/Localhost 0.12
347 TestNetworkPlugins/group/auto/HairPin 0.16
348 TestNetworkPlugins/group/enable-default-cni/Start 101.16
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
351 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
352 TestNetworkPlugins/group/kindnet/DNS 0.16
353 TestNetworkPlugins/group/kindnet/Localhost 0.13
354 TestNetworkPlugins/group/kindnet/HairPin 0.12
355 TestNetworkPlugins/group/bridge/Start 61.8
356 TestNetworkPlugins/group/flannel/ControllerPod 6.01
357 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
358 TestNetworkPlugins/group/flannel/NetCatPod 11.24
359 TestNetworkPlugins/group/calico/Start 90.94
360 TestNetworkPlugins/group/flannel/DNS 0.19
361 TestNetworkPlugins/group/flannel/Localhost 0.17
362 TestNetworkPlugins/group/flannel/HairPin 0.14
363 TestNetworkPlugins/group/custom-flannel/Start 93.66
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.26
366 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
367 TestNetworkPlugins/group/bridge/NetCatPod 11.31
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
369 TestNetworkPlugins/group/bridge/DNS 0.19
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
371 TestNetworkPlugins/group/bridge/Localhost 0.14
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
373 TestNetworkPlugins/group/bridge/HairPin 0.14
374 TestNetworkPlugins/group/calico/ControllerPod 6.01
375 TestNetworkPlugins/group/calico/KubeletFlags 0.21
376 TestNetworkPlugins/group/calico/NetCatPod 11.21
377 TestNetworkPlugins/group/calico/DNS 0.15
378 TestNetworkPlugins/group/calico/Localhost 0.15
379 TestNetworkPlugins/group/calico/HairPin 0.14
380 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
381 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
382 TestNetworkPlugins/group/custom-flannel/DNS 0.15
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (27.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-944621 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-944621 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.289317904s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-944621
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-944621: exit status 85 (55.410592ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-944621 | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC |          |
	|         | -p download-only-944621        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:20:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:20:25.271166   11024 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:20:25.271416   11024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:20:25.271426   11024 out.go:304] Setting ErrFile to fd 2...
	I0719 14:20:25.271433   11024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:20:25.271611   11024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	W0719 14:20:25.271766   11024 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19302-3847/.minikube/config/config.json: open /home/jenkins/minikube-integration/19302-3847/.minikube/config/config.json: no such file or directory
	I0719 14:20:25.272321   11024 out.go:298] Setting JSON to true
	I0719 14:20:25.273138   11024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":171,"bootTime":1721398654,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:20:25.273195   11024 start.go:139] virtualization: kvm guest
	I0719 14:20:25.275472   11024 out.go:97] [download-only-944621] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0719 14:20:25.275554   11024 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 14:20:25.275585   11024 notify.go:220] Checking for updates...
	I0719 14:20:25.277019   11024 out.go:169] MINIKUBE_LOCATION=19302
	I0719 14:20:25.278435   11024 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:20:25.279795   11024 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:20:25.281172   11024 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:20:25.282656   11024 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 14:20:25.285069   11024 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 14:20:25.285268   11024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:20:25.382655   11024 out.go:97] Using the kvm2 driver based on user configuration
	I0719 14:20:25.382681   11024 start.go:297] selected driver: kvm2
	I0719 14:20:25.382696   11024 start.go:901] validating driver "kvm2" against <nil>
	I0719 14:20:25.383030   11024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:20:25.383160   11024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:20:25.397194   11024 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:20:25.397249   11024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 14:20:25.397757   11024 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 14:20:25.397923   11024 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 14:20:25.397952   11024 cni.go:84] Creating CNI manager for ""
	I0719 14:20:25.397963   11024 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:20:25.397972   11024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 14:20:25.398033   11024 start.go:340] cluster config:
	{Name:download-only-944621 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-944621 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:20:25.398209   11024 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:20:25.400233   11024 out.go:97] Downloading VM boot image ...
	I0719 14:20:25.400262   11024 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 14:20:35.379522   11024 out.go:97] Starting "download-only-944621" primary control-plane node in "download-only-944621" cluster
	I0719 14:20:35.379547   11024 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 14:20:35.493090   11024 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 14:20:35.493117   11024 cache.go:56] Caching tarball of preloaded images
	I0719 14:20:35.493334   11024 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 14:20:35.495196   11024 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 14:20:35.495211   11024 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 14:20:35.607827   11024 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 14:20:48.362497   11024 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 14:20:48.362619   11024 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 14:20:49.264727   11024 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 14:20:49.265124   11024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/download-only-944621/config.json ...
	I0719 14:20:49.265163   11024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/download-only-944621/config.json: {Name:mk1310de6b5d40d93ef23932aaca02ee3a9268a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:20:49.265342   11024 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 14:20:49.265520   11024 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-944621 host does not exist
	  To start a cluster, run: "minikube start -p download-only-944621"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-944621
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (13.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-905246 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-905246 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.825658292s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (13.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-905246
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-905246: exit status 85 (478.021426ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-944621 | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC |                     |
	|         | -p download-only-944621        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC | 19 Jul 24 14:20 UTC |
	| delete  | -p download-only-944621        | download-only-944621 | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC | 19 Jul 24 14:20 UTC |
	| start   | -o=json --download-only        | download-only-905246 | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC |                     |
	|         | -p download-only-905246        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:20:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:20:52.867780   11300 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:20:52.867867   11300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:20:52.867874   11300 out.go:304] Setting ErrFile to fd 2...
	I0719 14:20:52.867878   11300 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:20:52.868060   11300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:20:52.868545   11300 out.go:298] Setting JSON to true
	I0719 14:20:52.869339   11300 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":199,"bootTime":1721398654,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:20:52.869391   11300 start.go:139] virtualization: kvm guest
	I0719 14:20:52.871698   11300 out.go:97] [download-only-905246] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:20:52.871819   11300 notify.go:220] Checking for updates...
	I0719 14:20:52.873141   11300 out.go:169] MINIKUBE_LOCATION=19302
	I0719 14:20:52.874463   11300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:20:52.875884   11300 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:20:52.877042   11300 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:20:52.878388   11300 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 14:20:52.880773   11300 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 14:20:52.880957   11300 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:20:52.912451   11300 out.go:97] Using the kvm2 driver based on user configuration
	I0719 14:20:52.912472   11300 start.go:297] selected driver: kvm2
	I0719 14:20:52.912484   11300 start.go:901] validating driver "kvm2" against <nil>
	I0719 14:20:52.912782   11300 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:20:52.912854   11300 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:20:52.927351   11300 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:20:52.927394   11300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 14:20:52.927851   11300 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 14:20:52.927989   11300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 14:20:52.928035   11300 cni.go:84] Creating CNI manager for ""
	I0719 14:20:52.928048   11300 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:20:52.928055   11300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 14:20:52.928101   11300 start.go:340] cluster config:
	{Name:download-only-905246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-905246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:20:52.928188   11300 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:20:52.929875   11300 out.go:97] Starting "download-only-905246" primary control-plane node in "download-only-905246" cluster
	I0719 14:20:52.929888   11300 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:20:53.038279   11300 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 14:20:53.038302   11300 cache.go:56] Caching tarball of preloaded images
	I0719 14:20:53.038460   11300 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 14:20:53.040498   11300 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 14:20:53.040522   11300 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0719 14:20:53.151824   11300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-905246 host does not exist
	  To start a cluster, run: "minikube start -p download-only-905246"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.48s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-905246
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (53.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-819425 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-819425 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (53.586878882s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (53.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-819425
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-819425: exit status 85 (56.971916ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-944621 | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC |                     |
	|         | -p download-only-944621             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC | 19 Jul 24 14:20 UTC |
	| delete  | -p download-only-944621             | download-only-944621 | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC | 19 Jul 24 14:20 UTC |
	| start   | -o=json --download-only             | download-only-905246 | jenkins | v1.33.1 | 19 Jul 24 14:20 UTC |                     |
	|         | -p download-only-905246             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 14:21 UTC | 19 Jul 24 14:21 UTC |
	| delete  | -p download-only-905246             | download-only-905246 | jenkins | v1.33.1 | 19 Jul 24 14:21 UTC | 19 Jul 24 14:21 UTC |
	| start   | -o=json --download-only             | download-only-819425 | jenkins | v1.33.1 | 19 Jul 24 14:21 UTC |                     |
	|         | -p download-only-819425             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 14:21:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 14:21:07.424401   11520 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:21:07.424508   11520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:21:07.424516   11520 out.go:304] Setting ErrFile to fd 2...
	I0719 14:21:07.424521   11520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:21:07.424691   11520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:21:07.425214   11520 out.go:298] Setting JSON to true
	I0719 14:21:07.426014   11520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":213,"bootTime":1721398654,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:21:07.426072   11520 start.go:139] virtualization: kvm guest
	I0719 14:21:07.428207   11520 out.go:97] [download-only-819425] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:21:07.428334   11520 notify.go:220] Checking for updates...
	I0719 14:21:07.429720   11520 out.go:169] MINIKUBE_LOCATION=19302
	I0719 14:21:07.431253   11520 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:21:07.432414   11520 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:21:07.433669   11520 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:21:07.434788   11520 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 14:21:07.436945   11520 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 14:21:07.437151   11520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:21:07.468540   11520 out.go:97] Using the kvm2 driver based on user configuration
	I0719 14:21:07.468564   11520 start.go:297] selected driver: kvm2
	I0719 14:21:07.468576   11520 start.go:901] validating driver "kvm2" against <nil>
	I0719 14:21:07.468885   11520 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:21:07.468962   11520 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-3847/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 14:21:07.483314   11520 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 14:21:07.483366   11520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 14:21:07.483814   11520 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 14:21:07.483951   11520 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 14:21:07.484008   11520 cni.go:84] Creating CNI manager for ""
	I0719 14:21:07.484020   11520 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 14:21:07.484027   11520 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 14:21:07.484076   11520 start.go:340] cluster config:
	{Name:download-only-819425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-819425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:21:07.484156   11520 iso.go:125] acquiring lock: {Name:mka7ff476ebe5dea1005e82f43afe0b11587572f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 14:21:07.486153   11520 out.go:97] Starting "download-only-819425" primary control-plane node in "download-only-819425" cluster
	I0719 14:21:07.486173   11520 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 14:21:07.594821   11520 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 14:21:07.594847   11520 cache.go:56] Caching tarball of preloaded images
	I0719 14:21:07.595000   11520 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 14:21:07.596904   11520 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 14:21:07.596921   11520 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 14:21:07.706699   11520 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 14:21:19.072075   11520 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 14:21:19.072171   11520 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19302-3847/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 14:21:19.809343   11520 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0719 14:21:19.809667   11520 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/download-only-819425/config.json ...
	I0719 14:21:19.809695   11520 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/download-only-819425/config.json: {Name:mkb3059ef00747741860ce1c41751bd2b7616307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 14:21:19.809840   11520 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 14:21:19.809972   11520 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19302-3847/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-819425 host does not exist
	  To start a cluster, run: "minikube start -p download-only-819425"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
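
Note: the non-zero exit above is the expected outcome. A --download-only profile never creates a VM, so "minikube logs" reports that the host does not exist and exits with status 85, which the test asserts. A minimal manual reproduction (a sketch, not part of the test suite):

    # Sketch: reproduce the expected "host does not exist" failure by hand.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-819425 --force \
      --kubernetes-version=v1.31.0-beta.0 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 logs -p download-only-819425
    echo "exit status: $?"   # 85 while the profile has no running host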

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-819425
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-598622 --alsologtostderr --binary-mirror http://127.0.0.1:46457 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-598622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-598622
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (122.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-687544 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-687544 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m1.026695718s)
helpers_test.go:175: Cleaning up "offline-crio-687544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-687544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-687544: (1.133566412s)
--- PASS: TestOffline (122.16s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-018825
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-018825: exit status 85 (48.005216ms)

                                                
                                                
-- stdout --
	* Profile "addons-018825" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-018825"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-018825
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-018825: exit status 85 (47.107691ms)

                                                
                                                
-- stdout --
	* Profile "addons-018825" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-018825"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (146.5s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-018825 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-018825 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.501084568s)
--- PASS: TestAddons/Setup (146.50s)

                                                
                                    
TestAddons/parallel/Registry (16.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 25.20019ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-k884k" [f109574c-299a-469d-94a4-ad81e51b9efa] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.044464568s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jq9hm" [90bf1ad6-3f9b-465b-aaa2-0d77bd8970a4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004874703s
addons_test.go:342: (dbg) Run:  kubectl --context addons-018825 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-018825 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-018825 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.861387693s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 ip
2024/07/19 14:24:44 [DEBUG] GET http://192.168.39.100:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.74s)
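
For reference, the two registry probes above can be repeated by hand; a rough sketch of both the in-cluster path and the node-IP path (the port-5000 DEBUG line above shows the latter):

    # In-cluster check: a busybox pod resolving the registry Service DNS name.
    kubectl --context addons-018825 run registry-probe --rm --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Node-path check against the node IP reported by "minikube ip", port 5000.
    curl -s "http://$(out/minikube-linux-amd64 -p addons-018825 ip):5000"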

                                                
                                    
TestAddons/parallel/InspektorGadget (17.1s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hbjxz" [8db48ea2-7b84-4129-b2be-9dbe13115fd5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004296244s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-018825
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-018825: (12.094397396s)
--- PASS: TestAddons/parallel/InspektorGadget (17.10s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.52s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 23.333223ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-c8ct4" [f5d05cf3-2614-4ccf-9d6f-5afb52d9c031] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.045948771s
addons_test.go:475: (dbg) Run:  kubectl --context addons-018825 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-018825 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.832734865s)
addons_test.go:480: kubectl --context addons-018825 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: Internal error occurred: unable to upgrade connection: container helm-test not found in pod helm-test_kube-system
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.52s)
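
The stderr noise above ("Unable to use a TTY ...") comes from passing -it to kubectl run in a non-interactive CI shell; the version check itself still passes. Dropping the TTY request is enough to silence it (sketch only, not what the test runs):

    # Same tiller version check without requesting a TTY.
    kubectl --context addons-018825 run helm-test --rm --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 --namespace=kube-system -- version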

                                                
                                    
TestAddons/parallel/CSI (85.15s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 8.246787ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-018825 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-018825 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bd8bb86a-a2c9-4a47-ab05-06c7167ae1e4] Pending
helpers_test.go:344: "task-pv-pod" [bd8bb86a-a2c9-4a47-ab05-06c7167ae1e4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bd8bb86a-a2c9-4a47-ab05-06c7167ae1e4] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003906197s
addons_test.go:586: (dbg) Run:  kubectl --context addons-018825 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-018825 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-018825 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-018825 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-018825 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-018825 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-018825 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [96cfac6f-b2b8-4d18-af59-ac7acd7ba117] Pending
helpers_test.go:344: "task-pv-pod-restore" [96cfac6f-b2b8-4d18-af59-ac7acd7ba117] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [96cfac6f-b2b8-4d18-af59-ac7acd7ba117] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003266116s
addons_test.go:628: (dbg) Run:  kubectl --context addons-018825 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-018825 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-018825 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-018825 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.793660917s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (85.15s)
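
The long runs of "get pvc ... -o jsonpath={.status.phase}" above are the test helper polling until each claim is Bound. With a recent kubectl (v1.23+ supports --for=jsonpath) the same wait can be expressed in a single call; a sketch, not what the helper actually runs:

    # Block until the PVC reports phase Bound, with the same 6-minute budget.
    kubectl --context addons-018825 wait pvc/hpvc --namespace default \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s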

                                                
                                    
TestAddons/parallel/Headlamp (13.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-018825 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-018825 --alsologtostderr -v=1: (1.027820028s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-zbbqp" [8b18ef56-46ef-41f8-a085-3840463e848b] Pending
helpers_test.go:344: "headlamp-7867546754-zbbqp" [8b18ef56-46ef-41f8-a085-3840463e848b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-zbbqp" [8b18ef56-46ef-41f8-a085-3840463e848b] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004175904s
--- PASS: TestAddons/parallel/Headlamp (13.03s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-k4vg8" [76bc110f-9f36-4d1b-a22d-4c703f5cd673] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006554983s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-018825
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
TestAddons/parallel/LocalPath (56.1s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-018825 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-018825 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8b7f43de-86d1-441e-8a96-a5882280f4dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8b7f43de-86d1-441e-8a96-a5882280f4dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8b7f43de-86d1-441e-8a96-a5882280f4dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003954975s
addons_test.go:992: (dbg) Run:  kubectl --context addons-018825 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 ssh "cat /opt/local-path-provisioner/pvc-b22e2d8b-ef50-4e0e-ac1c-eda671cc595d_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-018825 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-018825 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-018825 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-018825 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.304609258s)
--- PASS: TestAddons/parallel/LocalPath (56.10s)
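
The ssh step above hard-codes the provisioned directory, whose prefix matches the PV name (pvc-<uid>) that local-path-provisioner created for test-pvc. A sketch that resolves the name instead of hard-coding it, assuming the <pv>_<namespace>_<pvc> directory layout seen in the log:

    # Look up the bound PV name, then read back the file the test pod wrote.
    PV=$(kubectl --context addons-018825 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-018825 ssh \
      "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"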

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.79s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6bcnd" [ec6c8a36-43a7-42bd-bb5d-9840f023356c] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.070529396s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-018825
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.79s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-hw6vk" [2dad5a45-80c8-4d63-aadc-d2166af16dc0] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004376646s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-018825 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-018825 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (108.58s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-127438 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-127438 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m46.994540309s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-127438 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-127438 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-127438 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-127438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-127438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-127438: (1.067743887s)
--- PASS: TestCertOptions (108.58s)
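
Judging from the commands above, the test checks that the extra --apiserver-ips/--apiserver-names end up as SANs in the generated apiserver certificate and that the kubeconfig points at the non-default port 8555. A manual spot-check of the same things (sketch):

    out/minikube-linux-amd64 -p cert-options-127438 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"        # expect 192.168.15.15 and www.google.com
    kubectl --context cert-options-127438 config view --minify \
      -o jsonpath='{.clusters[0].cluster.server}'  # expect a URL ending in :8555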

                                                
                                    
TestCertExpiration (305.43s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-939600 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-939600 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m25.607121251s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-939600 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-939600 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.810413576s)
helpers_test.go:175: Cleaning up "cert-expiration-939600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-939600
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-939600: (1.009122904s)
--- PASS: TestCertExpiration (305.43s)

                                                
                                    
TestForceSystemdFlag (74.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-632791 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-632791 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.771745416s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-632791 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-632791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-632791
--- PASS: TestForceSystemdFlag (74.77s)

                                                
                                    
TestForceSystemdEnv (62.02s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-802753 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0719 15:34:28.744159   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-802753 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.793290897s)
helpers_test.go:175: Cleaning up "force-systemd-env-802753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-802753
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-802753: (1.231210494s)
--- PASS: TestForceSystemdEnv (62.02s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.37s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.37s)

                                                
                                    
TestErrorSpam/setup (40.62s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-137648 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-137648 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-137648 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-137648 --driver=kvm2  --container-runtime=crio: (40.620071742s)
--- PASS: TestErrorSpam/setup (40.62s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

                                                
                                    
TestErrorSpam/stop (4.79s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 stop: (1.591234763s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 stop: (1.847602029s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-137648 --log_dir /tmp/nospam-137648 stop: (1.349692991s)
--- PASS: TestErrorSpam/stop (4.79s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19302-3847/.minikube/files/etc/test/nested/copy/11012/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (96.53s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-814991 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0719 14:34:28.744113   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:28.749696   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:28.760035   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:28.780484   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:28.820790   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:28.901114   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:29.061664   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:29.382299   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:30.023322   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:31.303837   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:33.864625   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:38.985669   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:34:49.226486   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:35:09.706690   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-814991 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m36.529955889s)
--- PASS: TestFunctional/serial/StartWithProxy (96.53s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (50.52s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-814991 --alsologtostderr -v=8
E0719 14:35:50.667156   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-814991 --alsologtostderr -v=8: (50.523132116s)
functional_test.go:659: soft start took 50.523722584s for "functional-814991" cluster.
--- PASS: TestFunctional/serial/SoftStart (50.52s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-814991 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-814991 cache add registry.k8s.io/pause:3.3: (1.097988593s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-814991 cache add registry.k8s.io/pause:latest: (1.024125343s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-814991 /tmp/TestFunctionalserialCacheCmdcacheadd_local1251675137/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cache add minikube-local-cache-test:functional-814991
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-814991 cache add minikube-local-cache-test:functional-814991: (1.856225342s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cache delete minikube-local-cache-test:functional-814991
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-814991
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.348069ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
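
The cache workflow above can be replayed by hand against any running profile; a minimal sketch, using the profile name from this run (minikube stands in for the out/minikube-linux-amd64 binary the suite builds):
	# remove the image from the node's runtime, confirm it is gone, then restore it from minikube's local cache
	$ minikube -p functional-814991 ssh sudo crictl rmi registry.k8s.io/pause:latest
	$ minikube -p functional-814991 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image absent
	$ minikube -p functional-814991 cache reload
	$ minikube -p functional-814991 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again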

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 kubectl -- --context functional-814991 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-814991 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-814991 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0719 14:37:12.587706   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-814991 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.493835318s)
functional_test.go:757: restart took 34.493939792s for "functional-814991" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.49s)
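
--extra-config takes component.key=value pairs and passes them through to the named control-plane component on restart; a sketch of the invocation exercised above, with the resulting profile entry as it appears later in this log:
	$ minikube start -p functional-814991 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	    --wait=all
	# recorded in the profile config as:
	#   ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}]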

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-814991 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-814991 logs: (1.381782989s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 logs --file /tmp/TestFunctionalserialLogsFileCmd535642331/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-814991 logs --file /tmp/TestFunctionalserialLogsFileCmd535642331/001/logs.txt: (1.409318945s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.88s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-814991 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-814991
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-814991: exit status 115 (265.318274ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.188:31764 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-814991 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-814991 delete -f testdata/invalidsvc.yaml: (1.407734413s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)
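
The SVC_UNREACHABLE exit comes from minikube service finding a Service whose selector matches no running pod. testdata/invalidsvc.yaml is not reproduced in this log; a hypothetical equivalent would be:
	# a NodePort service with a selector no pod satisfies (label value is made up for illustration)
	$ kubectl --context functional-814991 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-app
	  ports:
	  - port: 80
	EOF
	$ minikube service invalid-svc -p functional-814991   # exits 115, as above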

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 config get cpus: exit status 14 (56.78554ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 config get cpus: exit status 14 (39.716768ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
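
minikube config get exits with status 14 when the key is unset, which is the behaviour the assertions above depend on; the same round trip by hand:
	$ minikube -p functional-814991 config get cpus     # not set -> exit 14
	$ minikube -p functional-814991 config set cpus 2
	$ minikube -p functional-814991 config get cpus     # prints 2
	$ minikube -p functional-814991 config unset cpus
	$ minikube -p functional-814991 config get cpus     # exit 14 again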

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (45.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-814991 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-814991 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20314: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (45.30s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-814991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-814991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.746745ms)

                                                
                                                
-- stdout --
	* [functional-814991] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:37:30.967399   20207 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:37:30.967534   20207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:37:30.967544   20207 out.go:304] Setting ErrFile to fd 2...
	I0719 14:37:30.967551   20207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:37:30.967816   20207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:37:30.968491   20207 out.go:298] Setting JSON to false
	I0719 14:37:30.969636   20207 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1197,"bootTime":1721398654,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:37:30.969712   20207 start.go:139] virtualization: kvm guest
	I0719 14:37:30.971828   20207 out.go:177] * [functional-814991] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 14:37:30.973758   20207 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:37:30.973762   20207 notify.go:220] Checking for updates...
	I0719 14:37:30.975252   20207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:37:30.976661   20207 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:37:30.977995   20207 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:37:30.979233   20207 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:37:30.980497   20207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:37:30.982289   20207 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:37:30.982883   20207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:37:30.982939   20207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:37:30.997512   20207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0719 14:37:30.997913   20207 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:37:30.998441   20207 main.go:141] libmachine: Using API Version  1
	I0719 14:37:30.998462   20207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:37:30.998912   20207 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:37:30.999151   20207 main.go:141] libmachine: (functional-814991) Calling .DriverName
	I0719 14:37:30.999440   20207 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:37:30.999768   20207 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:37:30.999813   20207 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:37:31.014451   20207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
	I0719 14:37:31.014814   20207 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:37:31.015294   20207 main.go:141] libmachine: Using API Version  1
	I0719 14:37:31.015318   20207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:37:31.015616   20207 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:37:31.015923   20207 main.go:141] libmachine: (functional-814991) Calling .DriverName
	I0719 14:37:31.047987   20207 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 14:37:31.049280   20207 start.go:297] selected driver: kvm2
	I0719 14:37:31.049305   20207 start.go:901] validating driver "kvm2" against &{Name:functional-814991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-814991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.188 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:37:31.049410   20207 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:37:31.051587   20207 out.go:177] 
	W0719 14:37:31.052963   20207 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 14:37:31.054184   20207 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-814991 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
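
Both starts above run with --dry-run, so nothing on the VM changes; the first fails validation because 250MB is below minikube's 1800MB minimum for this driver (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY), the second validates cleanly against the existing profile:
	$ minikube start -p functional-814991 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio              # rejected
	$ minikube start -p functional-814991 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio      # accepted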

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-814991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-814991 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.939668ms)

                                                
                                                
-- stdout --
	* [functional-814991] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 14:37:30.827062   20174 out.go:291] Setting OutFile to fd 1 ...
	I0719 14:37:30.827187   20174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:37:30.827197   20174 out.go:304] Setting ErrFile to fd 2...
	I0719 14:37:30.827202   20174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 14:37:30.827459   20174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 14:37:30.828011   20174 out.go:298] Setting JSON to false
	I0719 14:37:30.829094   20174 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1197,"bootTime":1721398654,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 14:37:30.829156   20174 start.go:139] virtualization: kvm guest
	I0719 14:37:30.831689   20174 out.go:177] * [functional-814991] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0719 14:37:30.833344   20174 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 14:37:30.833359   20174 notify.go:220] Checking for updates...
	I0719 14:37:30.836100   20174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 14:37:30.837371   20174 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 14:37:30.838710   20174 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 14:37:30.840010   20174 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 14:37:30.841422   20174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 14:37:30.842985   20174 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 14:37:30.843428   20174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:37:30.843470   20174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:37:30.862789   20174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0719 14:37:30.863213   20174 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:37:30.863816   20174 main.go:141] libmachine: Using API Version  1
	I0719 14:37:30.863836   20174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:37:30.864240   20174 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:37:30.864432   20174 main.go:141] libmachine: (functional-814991) Calling .DriverName
	I0719 14:37:30.864667   20174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 14:37:30.865016   20174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 14:37:30.865052   20174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 14:37:30.879429   20174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42311
	I0719 14:37:30.879872   20174 main.go:141] libmachine: () Calling .GetVersion
	I0719 14:37:30.880418   20174 main.go:141] libmachine: Using API Version  1
	I0719 14:37:30.880447   20174 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 14:37:30.880740   20174 main.go:141] libmachine: () Calling .GetMachineName
	I0719 14:37:30.880900   20174 main.go:141] libmachine: (functional-814991) Calling .DriverName
	I0719 14:37:30.914889   20174 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0719 14:37:30.916248   20174 start.go:297] selected driver: kvm2
	I0719 14:37:30.916267   20174 start.go:901] validating driver "kvm2" against &{Name:functional-814991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-814991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.188 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 14:37:30.916400   20174 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 14:37:30.919793   20174 out.go:177] 
	W0719 14:37:30.921230   20174 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 14:37:30.922452   20174 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
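
status -f takes a Go template over the status struct and -o json emits the same data machine-readably; a sketch with the fields exercised above:
	$ minikube -p functional-814991 status
	$ minikube -p functional-814991 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	$ minikube -p functional-814991 status -o json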

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-814991 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-814991 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-g5bcm" [1a89cf76-fd6e-4475-9821-b92a42654e28] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-g5bcm" [1a89cf76-fd6e-4475-9821-b92a42654e28] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004178042s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.50.188:32022
functional_test.go:1671: http://192.168.50.188:32022: success! body:

Hostname: hello-node-connect-57b4589c47-g5bcm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.188:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.188:32022
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.54s)
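
The connect test is the usual expose-and-curl loop; a sketch using the image and port from this run (the echoserver answers on 8080 and echoes the request back, as in the body above):
	$ kubectl --context functional-814991 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	$ kubectl --context functional-814991 expose deployment hello-node-connect --type=NodePort --port=8080
	$ URL=$(minikube -p functional-814991 service hello-node-connect --url)   # e.g. http://192.168.50.188:32022
	$ curl -s "$URL"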

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (31.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [05c2d3c7-830b-4349-9c2e-e9c34e8faa2c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00399s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-814991 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-814991 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-814991 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-814991 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-814991 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e888d19b-75d4-4f2c-9205-79b5978c5b52] Pending
helpers_test.go:344: "sp-pod" [e888d19b-75d4-4f2c-9205-79b5978c5b52] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e888d19b-75d4-4f2c-9205-79b5978c5b52] Running
2024/07/19 14:38:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004112221s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-814991 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-814991 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-814991 delete -f testdata/storage-provisioner/pod.yaml: (1.1384806s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-814991 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e436ae45-1e92-4530-aa1b-d1cba500cc7e] Pending
helpers_test.go:344: "sp-pod" [e436ae45-1e92-4530-aa1b-d1cba500cc7e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e436ae45-1e92-4530-aa1b-d1cba500cc7e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005874641s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-814991 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.79s)
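
The point of the second apply is persistence: a file written from the first sp-pod must still be visible after the pod is deleted and recreated against the same claim. Condensed from the steps above (the testdata manifests themselves are not shown in this log):
	$ kubectl --context functional-814991 apply -f testdata/storage-provisioner/pvc.yaml
	$ kubectl --context functional-814991 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-814991 exec sp-pod -- touch /tmp/mount/foo
	$ kubectl --context functional-814991 delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-814991 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-814991 exec sp-pod -- ls /tmp/mount    # foo survives the pod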

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh -n functional-814991 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cp functional-814991:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1801018993/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh -n functional-814991 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh -n functional-814991 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)
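
minikube cp copies in either direction between host and guest and creates missing directories on the guest side; the three transfers above, with the host temp path shortened to a hypothetical /tmp/out:
	$ minikube -p functional-814991 cp testdata/cp-test.txt /home/docker/cp-test.txt                       # host -> guest
	$ minikube -p functional-814991 cp functional-814991:/home/docker/cp-test.txt /tmp/out/cp-test.txt     # guest -> host
	$ minikube -p functional-814991 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                # parent dirs created on the guest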

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-814991 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-dqcr9" [0f1cc1c9-5529-4482-a6a5-42889161aba5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-dqcr9" [0f1cc1c9-5529-4482-a6a5-42889161aba5] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.004762922s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-814991 exec mysql-64454c8b5c-dqcr9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-814991 exec mysql-64454c8b5c-dqcr9 -- mysql -ppassword -e "show databases;": exit status 1 (181.56011ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-814991 exec mysql-64454c8b5c-dqcr9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-814991 exec mysql-64454c8b5c-dqcr9 -- mysql -ppassword -e "show databases;": exit status 1 (150.311085ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-814991 exec mysql-64454c8b5c-dqcr9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.59s)
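
The two ERROR 2002 failures are expected: mysqld inside the container has not finished creating its socket yet, so the test simply retries the query until it succeeds. The same probe by hand:
	# repeat until the server accepts connections on its local socket
	$ until kubectl --context functional-814991 exec mysql-64454c8b5c-dqcr9 -- \
	      mysql -ppassword -e "show databases;"; do sleep 2; done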

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11012/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo cat /etc/test/nested/copy/11012/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11012.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo cat /etc/ssl/certs/11012.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11012.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo cat /usr/share/ca-certificates/11012.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/110122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo cat /etc/ssl/certs/110122.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/110122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo cat /usr/share/ca-certificates/110122.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
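
CertSync verifies that certificates dropped into the minikube home certs directory on the host are pushed into the guest, both under their original name and under an OpenSSL hash name (51391683.0 and 3ec20f2e.0 above appear to be those hash-named copies). The checks themselves:
	$ minikube -p functional-814991 ssh "sudo cat /etc/ssl/certs/11012.pem"
	$ minikube -p functional-814991 ssh "sudo cat /usr/share/ca-certificates/11012.pem"
	$ minikube -p functional-814991 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named copy of the same cert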

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-814991 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 ssh "sudo systemctl is-active docker": exit status 1 (216.380393ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 ssh "sudo systemctl is-active containerd": exit status 1 (225.606162ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
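
With crio selected as the runtime, docker and containerd are expected to be stopped in the guest; systemctl is-active prints "inactive" and exits non-zero for a stopped unit, which is what surfaces as the non-zero ssh exits above. By hand (the crio line is an assumption about the active runtime, not part of this test):
	$ minikube -p functional-814991 ssh "sudo systemctl is-active docker"        # inactive, non-zero exit
	$ minikube -p functional-814991 ssh "sudo systemctl is-active containerd"    # inactive, non-zero exit
	$ minikube -p functional-814991 ssh "sudo systemctl is-active crio"          # active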

                                                
                                    
x
+
TestFunctional/parallel/License (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-814991 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-814991 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-dncpc" [13518e8c-aeb1-4c4e-a1b6-f4ee20aba2dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-dncpc" [13518e8c-aeb1-4c4e-a1b6-f4ee20aba2dc] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004301885s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (33.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdany-port966659717/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721399849143282237" to /tmp/TestFunctionalparallelMountCmdany-port966659717/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721399849143282237" to /tmp/TestFunctionalparallelMountCmdany-port966659717/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721399849143282237" to /tmp/TestFunctionalparallelMountCmdany-port966659717/001/test-1721399849143282237
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.91651ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 14:37 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 14:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 14:37 test-1721399849143282237
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh cat /mount-9p/test-1721399849143282237
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-814991 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4f12af73-c007-4810-96da-fbac96526855] Pending
helpers_test.go:344: "busybox-mount" [4f12af73-c007-4810-96da-fbac96526855] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4f12af73-c007-4810-96da-fbac96526855] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4f12af73-c007-4810-96da-fbac96526855] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 31.003344924s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-814991 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdany-port966659717/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (33.74s)
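The mount test above starts `minikube mount <hostdir>:/mount-9p` in the background and then checks that the 9p filesystem is visible inside the guest. A minimal Go sketch of that verification step, assuming `minikube` on PATH, the profile from this run, and an already-running mount process:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the logged check: findmnt -T /mount-9p | grep 9p (run over minikube ssh).
	out, err := exec.Command("minikube", "-p", "functional-814991",
		"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
	if err != nil {
		// Exit status 1 means the mount is not (yet) present; the test above
		// simply retries the same command until it succeeds.
		fmt.Printf("mount not ready: %v\n%s", err, out)
		return
	}
	fmt.Printf("9p mount present:\n%s", out)
}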

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "286.053378ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "42.981243ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "210.550466ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "40.497545ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.82s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 service list -o json
functional_test.go:1490: Took "922.456152ms" to run "out/minikube-linux-amd64 -p functional-814991 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.50.188:31592
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.77s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.50.188:31592
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
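Once `minikube service hello-node --url` has reported the NodePort endpoint, a quick reachability probe can be done from Go. A minimal sketch; the URL below is the one printed in this run's log and will differ between runs:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint reported by `minikube -p functional-814991 service hello-node --url` in this run.
	resp, err := http.Get("http://192.168.50.188:31592")
	if err != nil {
		fmt.Println("endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echoserver replies with a description of the request it received.
	fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
}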

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-814991 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-814991
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-814991
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-814991 image ls --format short --alsologtostderr:
I0719 14:38:06.860596   22120 out.go:291] Setting OutFile to fd 1 ...
I0719 14:38:06.860713   22120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:06.860723   22120 out.go:304] Setting ErrFile to fd 2...
I0719 14:38:06.860729   22120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:06.860990   22120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
I0719 14:38:06.861730   22120 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:06.861880   22120 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:06.862459   22120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:06.862516   22120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:06.878661   22120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43811
I0719 14:38:06.879273   22120 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:06.879930   22120 main.go:141] libmachine: Using API Version  1
I0719 14:38:06.879990   22120 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:06.880375   22120 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:06.880607   22120 main.go:141] libmachine: (functional-814991) Calling .GetState
I0719 14:38:06.882612   22120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:06.882660   22120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:06.897888   22120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
I0719 14:38:06.898312   22120 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:06.898800   22120 main.go:141] libmachine: Using API Version  1
I0719 14:38:06.898815   22120 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:06.899096   22120 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:06.899258   22120 main.go:141] libmachine: (functional-814991) Calling .DriverName
I0719 14:38:06.899434   22120 ssh_runner.go:195] Run: systemctl --version
I0719 14:38:06.899457   22120 main.go:141] libmachine: (functional-814991) Calling .GetSSHHostname
I0719 14:38:06.902328   22120 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:06.902677   22120 main.go:141] libmachine: (functional-814991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:e2:d8", ip: ""} in network mk-functional-814991: {Iface:virbr1 ExpiryTime:2024-07-19 15:34:26 +0000 UTC Type:0 Mac:52:54:00:ba:e2:d8 Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:functional-814991 Clientid:01:52:54:00:ba:e2:d8}
I0719 14:38:06.902711   22120 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined IP address 192.168.50.188 and MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:06.902788   22120 main.go:141] libmachine: (functional-814991) Calling .GetSSHPort
I0719 14:38:06.902953   22120 main.go:141] libmachine: (functional-814991) Calling .GetSSHKeyPath
I0719 14:38:06.903101   22120 main.go:141] libmachine: (functional-814991) Calling .GetSSHUsername
I0719 14:38:06.903235   22120 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/functional-814991/id_rsa Username:docker}
I0719 14:38:07.029415   22120 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 14:38:07.176874   22120 main.go:141] libmachine: Making call to close driver server
I0719 14:38:07.176892   22120 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:07.177276   22120 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:07.177320   22120 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 14:38:07.177331   22120 main.go:141] libmachine: (functional-814991) DBG | Closing plugin on server side
I0719 14:38:07.177338   22120 main.go:141] libmachine: Making call to close driver server
I0719 14:38:07.177444   22120 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:07.177657   22120 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:07.177684   22120 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-814991 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kicbase/echo-server           | functional-814991  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| localhost/minikube-local-cache-test     | functional-814991  | 4cd32bcb8a35e | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-814991 image ls --format table --alsologtostderr:
I0719 14:38:08.082822   22259 out.go:291] Setting OutFile to fd 1 ...
I0719 14:38:08.082946   22259 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:08.082956   22259 out.go:304] Setting ErrFile to fd 2...
I0719 14:38:08.082960   22259 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:08.083160   22259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
I0719 14:38:08.083810   22259 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:08.083954   22259 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:08.084495   22259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:08.084577   22259 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:08.100589   22259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
I0719 14:38:08.101019   22259 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:08.101599   22259 main.go:141] libmachine: Using API Version  1
I0719 14:38:08.101635   22259 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:08.101996   22259 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:08.102262   22259 main.go:141] libmachine: (functional-814991) Calling .GetState
I0719 14:38:08.104397   22259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:08.104484   22259 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:08.119821   22259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
I0719 14:38:08.120276   22259 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:08.120871   22259 main.go:141] libmachine: Using API Version  1
I0719 14:38:08.120905   22259 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:08.121218   22259 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:08.121437   22259 main.go:141] libmachine: (functional-814991) Calling .DriverName
I0719 14:38:08.121692   22259 ssh_runner.go:195] Run: systemctl --version
I0719 14:38:08.121728   22259 main.go:141] libmachine: (functional-814991) Calling .GetSSHHostname
I0719 14:38:08.124718   22259 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:08.125090   22259 main.go:141] libmachine: (functional-814991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:e2:d8", ip: ""} in network mk-functional-814991: {Iface:virbr1 ExpiryTime:2024-07-19 15:34:26 +0000 UTC Type:0 Mac:52:54:00:ba:e2:d8 Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:functional-814991 Clientid:01:52:54:00:ba:e2:d8}
I0719 14:38:08.125129   22259 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined IP address 192.168.50.188 and MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:08.125269   22259 main.go:141] libmachine: (functional-814991) Calling .GetSSHPort
I0719 14:38:08.125443   22259 main.go:141] libmachine: (functional-814991) Calling .GetSSHKeyPath
I0719 14:38:08.125622   22259 main.go:141] libmachine: (functional-814991) Calling .GetSSHUsername
I0719 14:38:08.125769   22259 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/functional-814991/id_rsa Username:docker}
I0719 14:38:08.245254   22259 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 14:38:08.302272   22259 main.go:141] libmachine: Making call to close driver server
I0719 14:38:08.302292   22259 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:08.302532   22259 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:08.302553   22259 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 14:38:08.302565   22259 main.go:141] libmachine: Making call to close driver server
I0719 14:38:08.302580   22259 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:08.302800   22259 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:08.302819   22259 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 14:38:08.302825   22259 main.go:141] libmachine: (functional-814991) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-814991 image ls --format json --alsologtostderr:
[{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:
d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-814991"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927a
c287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"4cd32bcb8a35e25af8dc849a9a4d309e96ac62c93bf07063da02d397d8c6c2b0","repoDigests":["localhost/minikube-local-cache-test@sha256:d65b55243204e93823f3b2bf64d5447ee3b8ac95d1ec1b53987ee4b52223c04c"],"repoTags":["localhost/minikube-local-cache-test:functional-814991"],"size":"3330"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e
4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb
01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["reg
istry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc1
40254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-814991 image ls --format json --alsologtostderr:
I0719 14:38:07.743207   22213 out.go:291] Setting OutFile to fd 1 ...
I0719 14:38:07.743488   22213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:07.743501   22213 out.go:304] Setting ErrFile to fd 2...
I0719 14:38:07.743508   22213 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:07.743852   22213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
I0719 14:38:07.744659   22213 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:07.744948   22213 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:07.745631   22213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:07.745680   22213 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:07.763269   22213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33595
I0719 14:38:07.763709   22213 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:07.764433   22213 main.go:141] libmachine: Using API Version  1
I0719 14:38:07.764467   22213 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:07.764878   22213 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:07.765091   22213 main.go:141] libmachine: (functional-814991) Calling .GetState
I0719 14:38:07.767149   22213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:07.767194   22213 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:07.781878   22213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
I0719 14:38:07.782371   22213 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:07.782988   22213 main.go:141] libmachine: Using API Version  1
I0719 14:38:07.783012   22213 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:07.783312   22213 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:07.783487   22213 main.go:141] libmachine: (functional-814991) Calling .DriverName
I0719 14:38:07.783669   22213 ssh_runner.go:195] Run: systemctl --version
I0719 14:38:07.783699   22213 main.go:141] libmachine: (functional-814991) Calling .GetSSHHostname
I0719 14:38:07.786546   22213 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:07.787135   22213 main.go:141] libmachine: (functional-814991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:e2:d8", ip: ""} in network mk-functional-814991: {Iface:virbr1 ExpiryTime:2024-07-19 15:34:26 +0000 UTC Type:0 Mac:52:54:00:ba:e2:d8 Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:functional-814991 Clientid:01:52:54:00:ba:e2:d8}
I0719 14:38:07.787160   22213 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined IP address 192.168.50.188 and MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:07.787251   22213 main.go:141] libmachine: (functional-814991) Calling .GetSSHPort
I0719 14:38:07.787390   22213 main.go:141] libmachine: (functional-814991) Calling .GetSSHKeyPath
I0719 14:38:07.787533   22213 main.go:141] libmachine: (functional-814991) Calling .GetSSHUsername
I0719 14:38:07.787716   22213 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/functional-814991/id_rsa Username:docker}
I0719 14:38:07.908193   22213 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 14:38:08.028687   22213 main.go:141] libmachine: Making call to close driver server
I0719 14:38:08.028704   22213 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:08.028994   22213 main.go:141] libmachine: (functional-814991) DBG | Closing plugin on server side
I0719 14:38:08.029000   22213 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:08.029018   22213 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 14:38:08.029027   22213 main.go:141] libmachine: Making call to close driver server
I0719 14:38:08.029035   22213 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:08.029287   22213 main.go:141] libmachine: (functional-814991) DBG | Closing plugin on server side
I0719 14:38:08.029302   22213 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:08.029327   22213 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.36s)
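The JSON stdout above uses the fields id, repoDigests, repoTags, and size (size as a string). A minimal Go sketch that decodes that output into a struct matching those fields; the profile name and `minikube` binary on PATH are assumptions from this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image matches the shape of the `image ls --format json` output shown above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-814991",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-60s %s bytes\n", firstOr(img.RepoTags, img.ID), img.Size)
	}
}

// firstOr returns the first tag if present, otherwise a fallback; the untagged
// dashboard and metrics-scraper entries above have an empty repoTags list.
func firstOr(tags []string, fallback string) string {
	if len(tags) > 0 {
		return tags[0]
	}
	return fallback
}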

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-814991 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 4cd32bcb8a35e25af8dc849a9a4d309e96ac62c93bf07063da02d397d8c6c2b0
repoDigests:
- localhost/minikube-local-cache-test@sha256:d65b55243204e93823f3b2bf64d5447ee3b8ac95d1ec1b53987ee4b52223c04c
repoTags:
- localhost/minikube-local-cache-test:functional-814991
size: "3330"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-814991
size: "4943877"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-814991 image ls --format yaml --alsologtostderr:
I0719 14:38:07.231444   22143 out.go:291] Setting OutFile to fd 1 ...
I0719 14:38:07.231567   22143 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:07.231579   22143 out.go:304] Setting ErrFile to fd 2...
I0719 14:38:07.231593   22143 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:07.232199   22143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
I0719 14:38:07.233001   22143 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:07.233157   22143 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:07.233730   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:07.233786   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:07.248754   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
I0719 14:38:07.249205   22143 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:07.249788   22143 main.go:141] libmachine: Using API Version  1
I0719 14:38:07.249806   22143 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:07.250127   22143 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:07.250322   22143 main.go:141] libmachine: (functional-814991) Calling .GetState
I0719 14:38:07.252229   22143 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:07.252277   22143 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:07.267581   22143 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
I0719 14:38:07.268030   22143 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:07.268656   22143 main.go:141] libmachine: Using API Version  1
I0719 14:38:07.268688   22143 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:07.269032   22143 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:07.269210   22143 main.go:141] libmachine: (functional-814991) Calling .DriverName
I0719 14:38:07.269406   22143 ssh_runner.go:195] Run: systemctl --version
I0719 14:38:07.269443   22143 main.go:141] libmachine: (functional-814991) Calling .GetSSHHostname
I0719 14:38:07.272427   22143 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:07.272846   22143 main.go:141] libmachine: (functional-814991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:e2:d8", ip: ""} in network mk-functional-814991: {Iface:virbr1 ExpiryTime:2024-07-19 15:34:26 +0000 UTC Type:0 Mac:52:54:00:ba:e2:d8 Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:functional-814991 Clientid:01:52:54:00:ba:e2:d8}
I0719 14:38:07.272870   22143 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined IP address 192.168.50.188 and MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:07.273053   22143 main.go:141] libmachine: (functional-814991) Calling .GetSSHPort
I0719 14:38:07.273226   22143 main.go:141] libmachine: (functional-814991) Calling .GetSSHKeyPath
I0719 14:38:07.273383   22143 main.go:141] libmachine: (functional-814991) Calling .GetSSHUsername
I0719 14:38:07.273542   22143 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/functional-814991/id_rsa Username:docker}
I0719 14:38:07.393371   22143 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 14:38:07.667981   22143 main.go:141] libmachine: Making call to close driver server
I0719 14:38:07.667999   22143 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:07.668268   22143 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:07.668285   22143 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 14:38:07.668302   22143 main.go:141] libmachine: Making call to close driver server
I0719 14:38:07.668310   22143 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:07.669640   22143 main.go:141] libmachine: (functional-814991) DBG | Closing plugin on server side
I0719 14:38:07.669700   22143 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:07.669752   22143 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 ssh pgrep buildkitd: exit status 1 (285.84917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image build -t localhost/my-image:functional-814991 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-814991 image build -t localhost/my-image:functional-814991 testdata/build --alsologtostderr: (4.818034908s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-814991 image build -t localhost/my-image:functional-814991 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4711985ad60
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-814991
--> 8b4d550f1ab
Successfully tagged localhost/my-image:functional-814991
8b4d550f1ab1a8a2ea640b30bfebba1fa9f1ccc214f0323210d54459398d2fa9
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-814991 image build -t localhost/my-image:functional-814991 testdata/build --alsologtostderr:
I0719 14:38:07.760547   22221 out.go:291] Setting OutFile to fd 1 ...
I0719 14:38:07.760698   22221 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:07.760721   22221 out.go:304] Setting ErrFile to fd 2...
I0719 14:38:07.760734   22221 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 14:38:07.760938   22221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
I0719 14:38:07.761534   22221 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:07.762095   22221 config.go:182] Loaded profile config "functional-814991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 14:38:07.762506   22221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:07.762566   22221 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:07.778328   22221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41261
I0719 14:38:07.778815   22221 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:07.779406   22221 main.go:141] libmachine: Using API Version  1
I0719 14:38:07.779427   22221 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:07.779779   22221 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:07.779990   22221 main.go:141] libmachine: (functional-814991) Calling .GetState
I0719 14:38:07.781861   22221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 14:38:07.781913   22221 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 14:38:07.797153   22221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46831
I0719 14:38:07.797529   22221 main.go:141] libmachine: () Calling .GetVersion
I0719 14:38:07.797930   22221 main.go:141] libmachine: Using API Version  1
I0719 14:38:07.797953   22221 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 14:38:07.798350   22221 main.go:141] libmachine: () Calling .GetMachineName
I0719 14:38:07.798528   22221 main.go:141] libmachine: (functional-814991) Calling .DriverName
I0719 14:38:07.798733   22221 ssh_runner.go:195] Run: systemctl --version
I0719 14:38:07.798767   22221 main.go:141] libmachine: (functional-814991) Calling .GetSSHHostname
I0719 14:38:07.801441   22221 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:07.801790   22221 main.go:141] libmachine: (functional-814991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:e2:d8", ip: ""} in network mk-functional-814991: {Iface:virbr1 ExpiryTime:2024-07-19 15:34:26 +0000 UTC Type:0 Mac:52:54:00:ba:e2:d8 Iaid: IPaddr:192.168.50.188 Prefix:24 Hostname:functional-814991 Clientid:01:52:54:00:ba:e2:d8}
I0719 14:38:07.801863   22221 main.go:141] libmachine: (functional-814991) DBG | domain functional-814991 has defined IP address 192.168.50.188 and MAC address 52:54:00:ba:e2:d8 in network mk-functional-814991
I0719 14:38:07.802041   22221 main.go:141] libmachine: (functional-814991) Calling .GetSSHPort
I0719 14:38:07.802175   22221 main.go:141] libmachine: (functional-814991) Calling .GetSSHKeyPath
I0719 14:38:07.802316   22221 main.go:141] libmachine: (functional-814991) Calling .GetSSHUsername
I0719 14:38:07.802433   22221 sshutil.go:53] new ssh client: &{IP:192.168.50.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/functional-814991/id_rsa Username:docker}
I0719 14:38:07.928469   22221 build_images.go:161] Building image from path: /tmp/build.918662104.tar
I0719 14:38:07.928538   22221 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 14:38:07.964621   22221 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.918662104.tar
I0719 14:38:07.980007   22221 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.918662104.tar: stat -c "%s %y" /var/lib/minikube/build/build.918662104.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.918662104.tar': No such file or directory
I0719 14:38:07.980042   22221 ssh_runner.go:362] scp /tmp/build.918662104.tar --> /var/lib/minikube/build/build.918662104.tar (3072 bytes)
I0719 14:38:08.067253   22221 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.918662104
I0719 14:38:08.099260   22221 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.918662104 -xf /var/lib/minikube/build/build.918662104.tar
I0719 14:38:08.111167   22221 crio.go:315] Building image: /var/lib/minikube/build/build.918662104
I0719 14:38:08.111243   22221 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-814991 /var/lib/minikube/build/build.918662104 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0719 14:38:12.503840   22221 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-814991 /var/lib/minikube/build/build.918662104 --cgroup-manager=cgroupfs: (4.392552217s)
I0719 14:38:12.503943   22221 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.918662104
I0719 14:38:12.515288   22221 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.918662104.tar
I0719 14:38:12.526394   22221 build_images.go:217] Built localhost/my-image:functional-814991 from /tmp/build.918662104.tar
I0719 14:38:12.526433   22221 build_images.go:133] succeeded building to: functional-814991
I0719 14:38:12.526438   22221 build_images.go:134] failed building to: 
I0719 14:38:12.526463   22221 main.go:141] libmachine: Making call to close driver server
I0719 14:38:12.526476   22221 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:12.526791   22221 main.go:141] libmachine: (functional-814991) DBG | Closing plugin on server side
I0719 14:38:12.526802   22221 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:12.526833   22221 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 14:38:12.526851   22221 main.go:141] libmachine: Making call to close driver server
I0719 14:38:12.526860   22221 main.go:141] libmachine: (functional-814991) Calling .Close
I0719 14:38:12.527079   22221 main.go:141] libmachine: Successfully made call to close driver server
I0719 14:38:12.527098   22221 main.go:141] libmachine: (functional-814991) DBG | Closing plugin on server side
I0719 14:38:12.527103   22221 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.33s)
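The build above is driven by `minikube image build`, which tars the context directory and runs the build on the node (via podman, per the stderr). A minimal Go sketch of the same invocation, assuming `minikube` on PATH and a build context directory containing a Dockerfile like the three steps shown in the stdout:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the logged command: minikube -p functional-814991 image build -t localhost/my-image:functional-814991 testdata/build
	cmd := exec.Command("minikube", "-p", "functional-814991",
		"image", "build", "-t", "localhost/my-image:functional-814991", "testdata/build")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("build failed: %v\n%s", err, out)
	}
	log.Printf("build output:\n%s", out)
	// Afterwards the image should appear in `minikube image ls`, as the test checks.
}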

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.987835385s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-814991
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image load --daemon docker.io/kicbase/echo-server:functional-814991 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-814991 image load --daemon docker.io/kicbase/echo-server:functional-814991 --alsologtostderr: (1.198150425s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image load --daemon docker.io/kicbase/echo-server:functional-814991 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-814991
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image load --daemon docker.io/kicbase/echo-server:functional-814991 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)
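
The three daemon-load tests above (ImageLoadDaemon, ImageReloadDaemon, ImageTagAndLoadDaemon) all drive the same flow: an image that exists only in the host's Docker daemon is copied into the cluster runtime. A short sketch of that flow with the echo-server image used in this run:

  # Tag the image in the host Docker daemon under the profile-specific name...
  docker pull docker.io/kicbase/echo-server:latest
  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-814991
  # ...then copy it from the daemon into the cluster's CRI-O store and verify.
  out/minikube-linux-amd64 -p functional-814991 image load --daemon docker.io/kicbase/echo-server:functional-814991
  out/minikube-linux-amd64 -p functional-814991 image ls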

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image save docker.io/kicbase/echo-server:functional-814991 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image rm docker.io/kicbase/echo-server:functional-814991 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-814991
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 image save --daemon docker.io/kicbase/echo-server:functional-814991 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-814991
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
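
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon make a round trip between the cluster runtime, a tarball on the host, and the host Docker daemon. A sketch of that round trip (the /tmp path is illustrative; the run above writes into the Jenkins workspace):

  # Export the image from the cluster runtime to a tarball on the host...
  out/minikube-linux-amd64 -p functional-814991 image save docker.io/kicbase/echo-server:functional-814991 /tmp/echo-server-save.tar
  # ...remove it from the cluster, then restore it from the tarball.
  out/minikube-linux-amd64 -p functional-814991 image rm docker.io/kicbase/echo-server:functional-814991
  out/minikube-linux-amd64 -p functional-814991 image load /tmp/echo-server-save.tar
  # Or push it straight back into the host Docker daemon.
  out/minikube-linux-amd64 -p functional-814991 image save --daemon docker.io/kicbase/echo-server:functional-814991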

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
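
The three update-context cases above differ only in how much of the kubeconfig already exists; the command itself rewrites the profile's kubeconfig entry so kubectl targets the cluster's current API endpoint. A minimal sketch (that the kubectl context name matches the profile name is an assumption here):

  # Re-sync the kubeconfig entry for this profile, e.g. after the VM came back with a new IP.
  out/minikube-linux-amd64 -p functional-814991 update-context
  kubectl --context functional-814991 get nodes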

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdspecific-port3895388743/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (198.484245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdspecific-port3895388743/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-814991 ssh "sudo umount -f /mount-9p": exit status 1 (181.22037ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-814991 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdspecific-port3895388743/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup924517684/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup924517684/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup924517684/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-814991 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup924517684/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup924517684/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-814991 /tmp/TestFunctionalparallelMountCmdVerifyCleanup924517684/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.64s)
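
The two MountCmd cases above cover a 9p mount pinned to a specific host port and the bulk teardown of several mounts. A hedged sketch, with /srv/data standing in for the temporary directories the test generates:

  # Host side: serve a local directory into the guest over 9p on a fixed port (run in the background).
  out/minikube-linux-amd64 mount -p functional-814991 /srv/data:/mount-9p --port 46464 &
  # Guest side: confirm the 9p mount and list its contents.
  out/minikube-linux-amd64 -p functional-814991 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-814991 ssh -- ls -la /mount-9p
  # Kill every mount process belonging to the profile in one shot.
  out/minikube-linux-amd64 mount -p functional-814991 --kill=true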

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-814991
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-814991
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-814991
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (270.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-999305 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 14:39:28.744265   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:39:56.428052   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 14:42:29.032391   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:29.037750   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:29.048057   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:29.068351   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:29.108693   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:29.189092   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:29.349491   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:29.670063   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:30.310969   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:31.592161   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:34.152392   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:39.273537   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:42:49.514455   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-999305 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m30.29667083s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (270.96s)
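
The --ha flag in the start line above provisions a cluster with multiple control-plane nodes; status then reports the role and state of each node. The same two steps, condensed:

  # Start a highly-available (multi-control-plane) cluster on KVM with CRI-O.
  out/minikube-linux-amd64 start -p ha-999305 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
  # Show host/kubelet/apiserver state for every node in the profile.
  out/minikube-linux-amd64 -p ha-999305 status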

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-999305 -- rollout status deployment/busybox: (5.529434007s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-2rfw6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-6kcdj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-pcfwd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-2rfw6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-6kcdj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-pcfwd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-2rfw6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-6kcdj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-pcfwd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.69s)
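
DeployApp is a DNS smoke test: a busybox deployment is rolled out across the HA cluster and each pod must resolve the in-cluster API service name. Condensed to its essentials (picking the first pod rather than iterating over all three, as the test does):

  # Deploy the test workload and wait for the rollout to finish.
  out/minikube-linux-amd64 kubectl -p ha-999305 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 kubectl -p ha-999305 -- rollout status deployment/busybox
  # Resolve the API service name from inside one of the busybox pods.
  POD=$(out/minikube-linux-amd64 kubectl -p ha-999305 -- get pods -o jsonpath='{.items[0].metadata.name}')
  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local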

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-2rfw6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-2rfw6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-6kcdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-6kcdj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-pcfwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-999305 -- exec busybox-fc5497c4f-pcfwd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-999305 -v=7 --alsologtostderr
E0719 14:43:09.995484   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:43:50.955942   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-999305 -v=7 --alsologtostderr: (58.327170809s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-999305 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp testdata/cp-test.txt ha-999305:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305:/home/docker/cp-test.txt ha-999305-m02:/home/docker/cp-test_ha-999305_ha-999305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test_ha-999305_ha-999305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305:/home/docker/cp-test.txt ha-999305-m03:/home/docker/cp-test_ha-999305_ha-999305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test_ha-999305_ha-999305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305:/home/docker/cp-test.txt ha-999305-m04:/home/docker/cp-test_ha-999305_ha-999305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test_ha-999305_ha-999305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp testdata/cp-test.txt ha-999305-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m02:/home/docker/cp-test.txt ha-999305:/home/docker/cp-test_ha-999305-m02_ha-999305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test_ha-999305-m02_ha-999305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m02:/home/docker/cp-test.txt ha-999305-m03:/home/docker/cp-test_ha-999305-m02_ha-999305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test_ha-999305-m02_ha-999305-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m02:/home/docker/cp-test.txt ha-999305-m04:/home/docker/cp-test_ha-999305-m02_ha-999305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test_ha-999305-m02_ha-999305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp testdata/cp-test.txt ha-999305-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt ha-999305:/home/docker/cp-test_ha-999305-m03_ha-999305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test_ha-999305-m03_ha-999305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt ha-999305-m02:/home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test_ha-999305-m03_ha-999305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m03:/home/docker/cp-test.txt ha-999305-m04:/home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test_ha-999305-m03_ha-999305-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp testdata/cp-test.txt ha-999305-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile162641532/001/cp-test_ha-999305-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt ha-999305:/home/docker/cp-test_ha-999305-m04_ha-999305.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305 "sudo cat /home/docker/cp-test_ha-999305-m04_ha-999305.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt ha-999305-m02:/home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test_ha-999305-m04_ha-999305-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 cp ha-999305-m04:/home/docker/cp-test.txt ha-999305-m03:/home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m03 "sudo cat /home/docker/cp-test_ha-999305-m04_ha-999305-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.65s)
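
CopyFile exercises every direction of minikube cp, which addresses files as [node:]path so one command can move data from the host to a node or between nodes. A short sketch of the pattern repeated above:

  # Host file into the primary node, then fan it out to a secondary node.
  out/minikube-linux-amd64 -p ha-999305 cp testdata/cp-test.txt ha-999305:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-999305 cp ha-999305:/home/docker/cp-test.txt ha-999305-m02:/home/docker/cp-test_ha-999305_ha-999305-m02.txt
  # Read it back over SSH on the target node to confirm the copy arrived.
  out/minikube-linux-amd64 -p ha-999305 ssh -n ha-999305-m02 "sudo cat /home/docker/cp-test_ha-999305_ha-999305-m02.txt"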

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.478980103s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-999305 node delete m03 -v=7 --alsologtostderr: (16.332544062s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.05s)
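
Node removal is addressed by the node's suffix (m03 here); the follow-up checks compare minikube's view of the remaining members with the API server's:

  # Drop the m03 control-plane node, then re-check both node lists.
  out/minikube-linux-amd64 -p ha-999305 node delete m03
  out/minikube-linux-amd64 -p ha-999305 status
  kubectl get nodes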

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (353.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-999305 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 14:57:29.032028   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:58:52.079624   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 14:59:28.744114   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 15:02:29.031804   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-999305 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.174247321s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (353.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (74.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-999305 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-999305 --control-plane -v=7 --alsologtostderr: (1m13.90881185s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-999305 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.72s)
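
node add grows an existing profile: without flags it adds a worker (as in AddWorkerNode earlier), and with --control-plane it adds another control-plane member as above:

  # Add a worker, then another control-plane node, and re-check the roster.
  out/minikube-linux-amd64 node add -p ha-999305
  out/minikube-linux-amd64 node add -p ha-999305 --control-plane
  out/minikube-linux-amd64 -p ha-999305 status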

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
x
+
TestJSONOutput/start/Command (54.77s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-935015 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0719 15:04:28.744802   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-935015 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (54.770554686s)
--- PASS: TestJSONOutput/start/Command (54.77s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-935015 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-935015 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-935015 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-935015 --output=json --user=testUser: (7.370268883s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-325463 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-325463 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.153371ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cc6030c1-b788-472a-9bd4-4579824072b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-325463] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"253a5aeb-2e80-44aa-9859-ce69db3e4dff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"8eda564f-fda0-476c-bc19-37edbba48ea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a79ae68a-c339-4f6c-ad95-d6bb434f1a4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig"}}
	{"specversion":"1.0","id":"6d3a95cc-d87a-4ce1-bffd-435dc90dcdce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube"}}
	{"specversion":"1.0","id":"7668059e-04b6-47d2-aaf6-7e8b4eae26d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"09d128da-f845-42d3-ab51-630eaaa383e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"89174124-4d99-4f0b-ab20-6bac7dff1154","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-325463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-325463
--- PASS: TestErrorJSONOutput (0.18s)
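
With --output=json every step and failure is emitted as a CloudEvent, so callers can detect errors like the DRV_UNSUPPORTED_OS event above without scraping human-readable text. A rough sketch of picking the error event out of the stream (plain grep here; a JSON-aware tool would be more robust):

  # The unsupported driver makes start fail; the error arrives as a typed JSON event on stdout.
  out/minikube-linux-amd64 start -p json-output-error-325463 --output=json --driver=fail \
    | grep '"type":"io.k8s.sigs.minikube.error"'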

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (83.67s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-125729 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-125729 --driver=kvm2  --container-runtime=crio: (38.466114229s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-134608 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-134608 --driver=kvm2  --container-runtime=crio: (42.805854602s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-125729
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-134608
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-134608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-134608
helpers_test.go:175: Cleaning up "first-125729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-125729
--- PASS: TestMinikubeProfile (83.67s)
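
TestMinikubeProfile shows that two clusters can coexist under one minikube home and that the profile command switches which one subsequent commands target by default:

  # Two independent KVM/CRI-O clusters, switched by profile name.
  out/minikube-linux-amd64 start -p first-125729 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p second-134608 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 profile first-125729    # make first-125729 the active profile
  out/minikube-linux-amd64 profile list -ojson     # machine-readable view of both profiles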

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (25.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-818090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-818090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.345863347s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.35s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-818090 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-818090 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
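
StartWithMountFirst boots a VM with --no-kubernetes and a 9p mount configured at start, and VerifyMountFirst then checks it from inside the guest (the default guest target /minikube-host comes from the ls check above; the host source directory is minikube's default). The same pair of steps, condensed:

  # Boot a kubeless VM with a 9p mount served on port 46464.
  out/minikube-linux-amd64 start -p mount-start-1-818090 --no-kubernetes --mount --mount-port 46464 --memory=2048 --driver=kvm2 --container-runtime=crio
  # Inspect the mount from inside the guest.
  out/minikube-linux-amd64 -p mount-start-1-818090 ssh -- mount | grep 9p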

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.38s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-835635 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-835635 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.383593145s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.38s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-818090 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-835635
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-835635: (1.269577861s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.65s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-835635
E0719 15:07:29.031881   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 15:07:31.792739   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-835635: (22.648156823s)
--- PASS: TestMountStart/serial/RestartStopped (23.65s)

TestMountStart/serial/VerifyMountPostStop (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)
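
The three VerifyMount* checks above follow the same two-probe pattern: list the shared directory over SSH, then confirm it is backed by a 9p filesystem. A minimal manual sketch, assuming the mount-start-2-835635 profile from this run is still up and uses the default /minikube-host mount point:

    # list the host directory shared into the guest
    out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- ls /minikube-host
    # grep exits non-zero if no 9p entry is present, which fails the check
    out/minikube-linux-amd64 -p mount-start-2-835635 ssh -- mount | grep 9p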

TestMultiNode/serial/FreshStart2Nodes (116.72s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-121443 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 15:09:28.744712   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-121443 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.325821224s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.72s)

TestMultiNode/serial/DeployApp2Nodes (6.41s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-121443 -- rollout status deployment/busybox: (4.980945538s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-9h6kk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-q8qnw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-9h6kk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-q8qnw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-9h6kk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-q8qnw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.41s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-9h6kk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-9h6kk -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-q8qnw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-121443 -- exec busybox-fc5497c4f-q8qnw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
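
The host-reachability check above is a two-step probe from inside each busybox pod: scrape the resolved address of host.minikube.internal out of nslookup's output (NR==5 selects the answer line in busybox's nslookup format), then send a single ping to it. A rough by-hand equivalent, using one of the pod names from this run:

    # extract the host-side gateway IP the same way the test's awk/cut pipeline does
    HOST_IP=$(kubectl --context multinode-121443 exec busybox-fc5497c4f-9h6kk -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # one ICMP probe back to the host network (192.168.39.1 in this run)
    kubectl --context multinode-121443 exec busybox-fc5497c4f-9h6kk -- sh -c "ping -c 1 $HOST_IP"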

TestMultiNode/serial/AddNode (54.34s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-121443 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-121443 -v 3 --alsologtostderr: (53.807892772s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.34s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-121443 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (6.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp testdata/cp-test.txt multinode-121443:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4276887194/001/cp-test_multinode-121443.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443:/home/docker/cp-test.txt multinode-121443-m02:/home/docker/cp-test_multinode-121443_multinode-121443-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m02 "sudo cat /home/docker/cp-test_multinode-121443_multinode-121443-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443:/home/docker/cp-test.txt multinode-121443-m03:/home/docker/cp-test_multinode-121443_multinode-121443-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m03 "sudo cat /home/docker/cp-test_multinode-121443_multinode-121443-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp testdata/cp-test.txt multinode-121443-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4276887194/001/cp-test_multinode-121443-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt multinode-121443:/home/docker/cp-test_multinode-121443-m02_multinode-121443.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443 "sudo cat /home/docker/cp-test_multinode-121443-m02_multinode-121443.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443-m02:/home/docker/cp-test.txt multinode-121443-m03:/home/docker/cp-test_multinode-121443-m02_multinode-121443-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m03 "sudo cat /home/docker/cp-test_multinode-121443-m02_multinode-121443-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp testdata/cp-test.txt multinode-121443-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4276887194/001/cp-test_multinode-121443-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt multinode-121443:/home/docker/cp-test_multinode-121443-m03_multinode-121443.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443 "sudo cat /home/docker/cp-test_multinode-121443-m03_multinode-121443.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443-m03:/home/docker/cp-test.txt multinode-121443-m02:/home/docker/cp-test_multinode-121443-m03_multinode-121443-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m02 "sudo cat /home/docker/cp-test_multinode-121443-m03_multinode-121443-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.87s)
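
CopyFile above exercises every direction `minikube cp` supports: a host file into a node, a node file back out to the host, and node to node, each followed by an ssh cat on the target. A condensed sketch of one round trip (the /tmp destination path is illustrative; the test uses a per-run temp directory):

    # host -> control-plane node
    out/minikube-linux-amd64 -p multinode-121443 cp testdata/cp-test.txt multinode-121443:/home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node, then verify on the receiving node
    out/minikube-linux-amd64 -p multinode-121443 cp multinode-121443:/home/docker/cp-test.txt multinode-121443-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-121443 ssh -n multinode-121443-m02 "sudo cat /home/docker/cp-test.txt"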

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-121443 node stop m03: (1.37275214s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-121443 status: exit status 7 (413.779342ms)
-- stdout --
	multinode-121443
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-121443-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-121443-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-121443 status --alsologtostderr: exit status 7 (418.996667ms)
-- stdout --
	multinode-121443
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-121443-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-121443-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0719 15:10:47.218351   39998 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:10:47.218808   39998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:10:47.218823   39998 out.go:304] Setting ErrFile to fd 2...
	I0719 15:10:47.218829   39998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:10:47.219230   39998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:10:47.219675   39998 out.go:298] Setting JSON to false
	I0719 15:10:47.219709   39998 mustload.go:65] Loading cluster: multinode-121443
	I0719 15:10:47.219791   39998 notify.go:220] Checking for updates...
	I0719 15:10:47.220098   39998 config.go:182] Loaded profile config "multinode-121443": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:10:47.220114   39998 status.go:255] checking status of multinode-121443 ...
	I0719 15:10:47.220460   39998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:10:47.220524   39998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:10:47.240697   39998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0719 15:10:47.241050   39998 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:10:47.241715   39998 main.go:141] libmachine: Using API Version  1
	I0719 15:10:47.241781   39998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:10:47.242081   39998 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:10:47.242305   39998 main.go:141] libmachine: (multinode-121443) Calling .GetState
	I0719 15:10:47.243808   39998 status.go:330] multinode-121443 host status = "Running" (err=<nil>)
	I0719 15:10:47.243822   39998 host.go:66] Checking if "multinode-121443" exists ...
	I0719 15:10:47.244096   39998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:10:47.244127   39998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:10:47.259066   39998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36893
	I0719 15:10:47.259388   39998 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:10:47.259782   39998 main.go:141] libmachine: Using API Version  1
	I0719 15:10:47.259799   39998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:10:47.260085   39998 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:10:47.260260   39998 main.go:141] libmachine: (multinode-121443) Calling .GetIP
	I0719 15:10:47.262720   39998 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:10:47.263132   39998 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:10:47.263163   39998 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:10:47.263255   39998 host.go:66] Checking if "multinode-121443" exists ...
	I0719 15:10:47.263521   39998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:10:47.263567   39998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:10:47.277816   39998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I0719 15:10:47.278261   39998 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:10:47.278657   39998 main.go:141] libmachine: Using API Version  1
	I0719 15:10:47.278681   39998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:10:47.279025   39998 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:10:47.279184   39998 main.go:141] libmachine: (multinode-121443) Calling .DriverName
	I0719 15:10:47.279371   39998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 15:10:47.279396   39998 main.go:141] libmachine: (multinode-121443) Calling .GetSSHHostname
	I0719 15:10:47.281733   39998 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:10:47.282112   39998 main.go:141] libmachine: (multinode-121443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:15:fd", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:07:53 +0000 UTC Type:0 Mac:52:54:00:b0:15:fd Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:multinode-121443 Clientid:01:52:54:00:b0:15:fd}
	I0719 15:10:47.282146   39998 main.go:141] libmachine: (multinode-121443) DBG | domain multinode-121443 has defined IP address 192.168.39.32 and MAC address 52:54:00:b0:15:fd in network mk-multinode-121443
	I0719 15:10:47.282322   39998 main.go:141] libmachine: (multinode-121443) Calling .GetSSHPort
	I0719 15:10:47.282476   39998 main.go:141] libmachine: (multinode-121443) Calling .GetSSHKeyPath
	I0719 15:10:47.282663   39998 main.go:141] libmachine: (multinode-121443) Calling .GetSSHUsername
	I0719 15:10:47.282809   39998 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443/id_rsa Username:docker}
	I0719 15:10:47.369759   39998 ssh_runner.go:195] Run: systemctl --version
	I0719 15:10:47.375765   39998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:10:47.394215   39998 kubeconfig.go:125] found "multinode-121443" server: "https://192.168.39.32:8443"
	I0719 15:10:47.394265   39998 api_server.go:166] Checking apiserver status ...
	I0719 15:10:47.394328   39998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:10:47.410139   39998 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0719 15:10:47.420682   39998 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 15:10:47.420736   39998 ssh_runner.go:195] Run: ls
	I0719 15:10:47.425113   39998 api_server.go:253] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I0719 15:10:47.429318   39998 api_server.go:279] https://192.168.39.32:8443/healthz returned 200:
	ok
	I0719 15:10:47.429339   39998 status.go:422] multinode-121443 apiserver status = Running (err=<nil>)
	I0719 15:10:47.429348   39998 status.go:257] multinode-121443 status: &{Name:multinode-121443 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 15:10:47.429361   39998 status.go:255] checking status of multinode-121443-m02 ...
	I0719 15:10:47.429624   39998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:10:47.429664   39998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:10:47.444511   39998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38933
	I0719 15:10:47.444916   39998 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:10:47.445349   39998 main.go:141] libmachine: Using API Version  1
	I0719 15:10:47.445384   39998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:10:47.445690   39998 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:10:47.445869   39998 main.go:141] libmachine: (multinode-121443-m02) Calling .GetState
	I0719 15:10:47.447459   39998 status.go:330] multinode-121443-m02 host status = "Running" (err=<nil>)
	I0719 15:10:47.447473   39998 host.go:66] Checking if "multinode-121443-m02" exists ...
	I0719 15:10:47.447744   39998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:10:47.447779   39998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:10:47.462373   39998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39589
	I0719 15:10:47.462770   39998 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:10:47.463201   39998 main.go:141] libmachine: Using API Version  1
	I0719 15:10:47.463217   39998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:10:47.463483   39998 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:10:47.463738   39998 main.go:141] libmachine: (multinode-121443-m02) Calling .GetIP
	I0719 15:10:47.466480   39998 main.go:141] libmachine: (multinode-121443-m02) DBG | domain multinode-121443-m02 has defined MAC address 52:54:00:4b:56:25 in network mk-multinode-121443
	I0719 15:10:47.466843   39998 main.go:141] libmachine: (multinode-121443-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:56:25", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:09:00 +0000 UTC Type:0 Mac:52:54:00:4b:56:25 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:multinode-121443-m02 Clientid:01:52:54:00:4b:56:25}
	I0719 15:10:47.466882   39998 main.go:141] libmachine: (multinode-121443-m02) DBG | domain multinode-121443-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:4b:56:25 in network mk-multinode-121443
	I0719 15:10:47.467001   39998 host.go:66] Checking if "multinode-121443-m02" exists ...
	I0719 15:10:47.467320   39998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:10:47.467351   39998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:10:47.482453   39998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0719 15:10:47.482833   39998 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:10:47.483263   39998 main.go:141] libmachine: Using API Version  1
	I0719 15:10:47.483278   39998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:10:47.483569   39998 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:10:47.483756   39998 main.go:141] libmachine: (multinode-121443-m02) Calling .DriverName
	I0719 15:10:47.483933   39998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 15:10:47.483952   39998 main.go:141] libmachine: (multinode-121443-m02) Calling .GetSSHHostname
	I0719 15:10:47.486674   39998 main.go:141] libmachine: (multinode-121443-m02) DBG | domain multinode-121443-m02 has defined MAC address 52:54:00:4b:56:25 in network mk-multinode-121443
	I0719 15:10:47.487091   39998 main.go:141] libmachine: (multinode-121443-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:56:25", ip: ""} in network mk-multinode-121443: {Iface:virbr1 ExpiryTime:2024-07-19 16:09:00 +0000 UTC Type:0 Mac:52:54:00:4b:56:25 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:multinode-121443-m02 Clientid:01:52:54:00:4b:56:25}
	I0719 15:10:47.487118   39998 main.go:141] libmachine: (multinode-121443-m02) DBG | domain multinode-121443-m02 has defined IP address 192.168.39.226 and MAC address 52:54:00:4b:56:25 in network mk-multinode-121443
	I0719 15:10:47.487241   39998 main.go:141] libmachine: (multinode-121443-m02) Calling .GetSSHPort
	I0719 15:10:47.487394   39998 main.go:141] libmachine: (multinode-121443-m02) Calling .GetSSHKeyPath
	I0719 15:10:47.487546   39998 main.go:141] libmachine: (multinode-121443-m02) Calling .GetSSHUsername
	I0719 15:10:47.487657   39998 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-3847/.minikube/machines/multinode-121443-m02/id_rsa Username:docker}
	I0719 15:10:47.566520   39998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:10:47.580393   39998 status.go:257] multinode-121443-m02 status: &{Name:multinode-121443-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 15:10:47.580441   39998 status.go:255] checking status of multinode-121443-m03 ...
	I0719 15:10:47.580756   39998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 15:10:47.580791   39998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 15:10:47.595688   39998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I0719 15:10:47.596076   39998 main.go:141] libmachine: () Calling .GetVersion
	I0719 15:10:47.596527   39998 main.go:141] libmachine: Using API Version  1
	I0719 15:10:47.596548   39998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 15:10:47.596826   39998 main.go:141] libmachine: () Calling .GetMachineName
	I0719 15:10:47.596995   39998 main.go:141] libmachine: (multinode-121443-m03) Calling .GetState
	I0719 15:10:47.598530   39998 status.go:330] multinode-121443-m03 host status = "Stopped" (err=<nil>)
	I0719 15:10:47.598542   39998 status.go:343] host is not running, skipping remaining checks
	I0719 15:10:47.598547   39998 status.go:257] multinode-121443-m03 status: &{Name:multinode-121443-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
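
Note that the two `status` calls above are expected to exit non-zero: once m03 is stopped, `minikube status` reports the degraded profile through its exit code (7 in this run) while still printing per-node detail, and the test asserts on that combination rather than treating it as a failure. A quick way to reproduce the check by hand:

    out/minikube-linux-amd64 -p multinode-121443 node stop m03
    out/minikube-linux-amd64 -p multinode-121443 status --alsologtostderr
    echo "status exit code: $?"   # 0 only when every node is up; 7 here with m03 stopped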

TestMultiNode/serial/StartAfterStop (39.65s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-121443 node start m03 -v=7 --alsologtostderr: (39.048575691s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.65s)

TestMultiNode/serial/DeleteNode (2.2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-121443 node delete m03: (1.689850449s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.20s)

TestMultiNode/serial/RestartMultiNode (176.5s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-121443 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 15:19:28.744070   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-121443 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m55.984202154s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-121443 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (176.50s)
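
The final assertion above is a plain kubectl go-template that prints the status of each node's Ready condition; the restart passes only if every line is True. The same query, written out as a standalone command:

    # prints one line per node: the status of its Ready condition (expect all True)
    kubectl --context multinode-121443 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'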

TestMultiNode/serial/ValidateNameConflict (43.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-121443
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-121443-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-121443-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.726977ms)
-- stdout --
	* [multinode-121443-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-121443-m02' is duplicated with machine name 'multinode-121443-m02' in profile 'multinode-121443'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-121443-m03 --driver=kvm2  --container-runtime=crio
E0719 15:22:29.032275   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-121443-m03 --driver=kvm2  --container-runtime=crio: (42.376880221s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-121443
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-121443: exit status 80 (202.660273ms)
-- stdout --
	* Adding node m03 to cluster multinode-121443 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-121443-m03 already exists in multinode-121443-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-121443-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.45s)

TestScheduledStopUnix (113.69s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-832630 --memory=2048 --driver=kvm2  --container-runtime=crio
E0719 15:29:28.744161   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-832630 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.150465451s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-832630 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-832630 -n scheduled-stop-832630
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-832630 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-832630 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-832630 -n scheduled-stop-832630
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-832630
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-832630 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-832630
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-832630: exit status 7 (64.53998ms)
-- stdout --
	scheduled-stop-832630
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-832630 -n scheduled-stop-832630
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-832630 -n scheduled-stop-832630: exit status 7 (61.558697ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-832630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-832630
--- PASS: TestScheduledStopUnix (113.69s)
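
The scheduled-stop flow above hinges on three stop variants: `--schedule` arms a delayed shutdown, `--cancel-scheduled` disarms it, and a later plain `status` confirms whether the VM actually went down (exit status 7 once it is stopped). A minimal sketch with the same profile name:

    out/minikube-linux-amd64 stop -p scheduled-stop-832630 --schedule 5m       # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-832630 --cancel-scheduled  # disarm it again
    out/minikube-linux-amd64 stop -p scheduled-stop-832630 --schedule 15s      # re-arm with a short delay
    sleep 30 && out/minikube-linux-amd64 status -p scheduled-stop-832630       # exits 7 once the host is Stopped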

TestRunningBinaryUpgrade (194.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1634296341 start -p running-upgrade-852424 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0719 15:32:12.081249   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1634296341 start -p running-upgrade-852424 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m58.015372421s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-852424 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-852424 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.207074627s)
helpers_test.go:175: Cleaning up "running-upgrade-852424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-852424
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-852424: (1.196395182s)
--- PASS: TestRunningBinaryUpgrade (194.55s)
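
The running-upgrade pattern above is deliberately simple: create the profile with an older released binary, then point the freshly built binary at the same profile and let it upgrade in place while the cluster is running. Stripped of test plumbing, the flow is:

    # older released binary (downloaded by the test to a temp path)
    /tmp/minikube-v1.26.0.1634296341 start -p running-upgrade-852424 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # freshly built binary reuses and upgrades the same profile
    out/minikube-linux-amd64 start -p running-upgrade-852424 --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p running-upgrade-852424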

TestStoppedBinaryUpgrade/Setup (2.69s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.69s)

TestStoppedBinaryUpgrade/Upgrade (143.96s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2477962193 start -p stopped-upgrade-743109 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2477962193 start -p stopped-upgrade-743109 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m34.98853809s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2477962193 -p stopped-upgrade-743109 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2477962193 -p stopped-upgrade-743109 stop: (2.118881462s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-743109 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0719 15:32:29.031946   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-743109 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.849620794s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (143.96s)

TestPause/serial/Start (106.29s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-464954 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-464954 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.290364544s)
--- PASS: TestPause/serial/Start (106.29s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-743109
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (59.770653ms)
-- stdout --
	* [NoKubernetes-490845] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
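
The non-zero exit above is the intended guard: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and the error text tells the user how to clear a previously persisted version. The two commands involved, taken from the output above:

    # rejected with MK_USAGE (exit 14): a version cannot be pinned while Kubernetes is disabled
    out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # clear any globally configured version before retrying with --no-kubernetes
    out/minikube-linux-amd64 config unset kubernetes-version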

TestNoKubernetes/serial/StartWithK8s (46.38s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490845 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490845 --driver=kvm2  --container-runtime=crio: (46.10043971s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-490845 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.38s)

TestNoKubernetes/serial/StartWithStopK8s (7.13s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --driver=kvm2  --container-runtime=crio: (5.868290623s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-490845 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-490845 status -o json: exit status 2 (230.215029ms)
-- stdout --
	{"Name":"NoKubernetes-490845","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-490845
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-490845: (1.034833559s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.13s)
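
StartWithStopK8s re-runs `start` with `--no-kubernetes` on the existing profile, so the machine keeps running while kubelet and the apiserver are shut down; that is why `status -o json` above reports Host Running but Kubelet and APIServer Stopped and exits with status 2. A sketch of the same check (piping through jq is an assumption for readability; the test parses the JSON itself):

    out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p NoKubernetes-490845 status -o json | jq '{Host, Kubelet, APIServer}'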

TestNetworkPlugins/group/false (2.88s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-526259 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-526259 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.500265ms)
-- stdout --
	* [false-526259] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0719 15:34:04.687422   50716 out.go:291] Setting OutFile to fd 1 ...
	I0719 15:34:04.687765   50716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:34:04.687779   50716 out.go:304] Setting ErrFile to fd 2...
	I0719 15:34:04.687786   50716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 15:34:04.688074   50716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-3847/.minikube/bin
	I0719 15:34:04.688913   50716 out.go:298] Setting JSON to false
	I0719 15:34:04.690303   50716 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4591,"bootTime":1721398654,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 15:34:04.690386   50716 start.go:139] virtualization: kvm guest
	I0719 15:34:04.692969   50716 out.go:177] * [false-526259] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 15:34:04.694406   50716 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 15:34:04.694485   50716 notify.go:220] Checking for updates...
	I0719 15:34:04.696960   50716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:34:04.698389   50716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-3847/kubeconfig
	I0719 15:34:04.699757   50716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-3847/.minikube
	I0719 15:34:04.701018   50716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 15:34:04.702188   50716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:34:04.704028   50716 config.go:182] Loaded profile config "NoKubernetes-490845": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0719 15:34:04.704201   50716 config.go:182] Loaded profile config "kubernetes-upgrade-574044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 15:34:04.704327   50716 config.go:182] Loaded profile config "pause-464954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 15:34:04.704452   50716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 15:34:04.739781   50716 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 15:34:04.741135   50716 start.go:297] selected driver: kvm2
	I0719 15:34:04.741159   50716 start.go:901] validating driver "kvm2" against <nil>
	I0719 15:34:04.741172   50716 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:34:04.743434   50716 out.go:177] 
	W0719 15:34:04.744634   50716 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0719 15:34:04.745948   50716 out.go:177] 
** /stderr **
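
The expected MK_USAGE failure above documents a constraint rather than a bug: with the crio runtime, minikube refuses `--cni=false` because CRI-O needs a CNI plugin for pod networking. A working variant would name a concrete CNI instead of disabling it (bridge is shown as an assumed example; any supported CNI value would do):

    # rejected: exit 14, the "crio" container runtime requires CNI
    out/minikube-linux-amd64 start -p false-526259 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # assumed working alternative: pick a real CNI such as bridge
    out/minikube-linux-amd64 start -p false-526259 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio
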
net_test.go:88: 
----------------------- debugLogs start: false-526259 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-526259

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-526259

>>> host: /etc/nsswitch.conf:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> host: /etc/hosts:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> host: /etc/resolv.conf:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-526259

>>> host: crictl pods:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> host: crictl containers:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> k8s: describe netcat deployment:
error: context "false-526259" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-526259" does not exist

>>> k8s: netcat logs:
error: context "false-526259" does not exist

>>> k8s: describe coredns deployment:
error: context "false-526259" does not exist

>>> k8s: describe coredns pods:
error: context "false-526259" does not exist

>>> k8s: coredns logs:
error: context "false-526259" does not exist

>>> k8s: describe api server pod(s):
error: context "false-526259" does not exist

>>> k8s: api server logs:
error: context "false-526259" does not exist

>>> host: /etc/cni:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> host: ip a s:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> host: ip r s:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> host: iptables-save:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> host: iptables table nat:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

>>> k8s: describe kube-proxy daemon set:
error: context "false-526259" does not exist
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-526259" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:34:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.42:8443
  name: NoKubernetes-490845
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.48:8443
  name: pause-464954
contexts:
- context:
    cluster: NoKubernetes-490845
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:34:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: NoKubernetes-490845
  name: NoKubernetes-490845
- context:
    cluster: pause-464954
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-464954
  name: pause-464954
current-context: NoKubernetes-490845
kind: Config
preferences: {}
users:
- name: NoKubernetes-490845
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/NoKubernetes-490845/client.crt
    client-key: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/NoKubernetes-490845/client.key
- name: pause-464954
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/client.crt
    client-key: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/client.key
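The dump above is an ordinary client-go kubeconfig: only the NoKubernetes-490845 and pause-464954 profiles are registered, which is why every kubectl and minikube call against the already-deleted false-526259 context in this debug section fails. As a minimal sketch of reading such a file programmatically (the path is hypothetical; substitute whatever KUBECONFIG the run actually uses, and k8s.io/client-go must be on the module path):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; the integration run keeps its own copy under the jenkins home shown above.
        cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        fmt.Println("current-context:", cfg.CurrentContext)
        for name, ctx := range cfg.Contexts {
            // Each context ties a named cluster to a named user (auth info).
            fmt.Printf("  %s -> cluster=%s user=%s\n", name, ctx.Cluster, ctx.AuthInfo)
        }
    }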

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-526259

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-526259"

                                                
                                                
----------------------- debugLogs end: false-526259 [took: 2.61669513s] --------------------------------
helpers_test.go:175: Cleaning up "false-526259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-526259
--- PASS: TestNetworkPlugins/group/false (2.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490845 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.748493443s)
--- PASS: TestNoKubernetes/serial/Start (28.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-490845 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-490845 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.876517ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
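The "exited with status 3" above is how systemd reports an inactive unit, which is exactly what this test expects on a --no-kubernetes node. A minimal local sketch of the same check, assuming a systemd host and run directly rather than through minikube ssh:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "systemctl is-active --quiet <unit>" exits 0 when the unit is active
        // and non-zero (typically 3) when it is inactive or not installed.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active (expected for --no-kubernetes):", err)
            return
        }
        fmt.Println("kubelet is active")
    }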

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-490845
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-490845: (1.292077897s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (46.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-490845 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-490845 --driver=kvm2  --container-runtime=crio: (46.885020252s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-490845 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-490845 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.318572ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (136.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-382231 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0719 15:37:29.032449   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-382231 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (2m16.117488224s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (136.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (128.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-817144 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0719 15:39:28.744385   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-817144 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (2m8.889373944s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (128.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-817144 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [796e5718-64e1-485b-b2eb-849dc0e300a3] Pending
helpers_test.go:344: "busybox" [796e5718-64e1-485b-b2eb-849dc0e300a3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [796e5718-64e1-485b-b2eb-849dc0e300a3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004432145s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-817144 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)
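The DeployApp step above waits up to 8m0s for a pod carrying the integration-test=busybox label to reach Running. A rough client-go equivalent of that wait, offered only as a sketch (the kubeconfig location and polling interval are assumptions, not the helper the test itself uses; the test selects --context embed-certs-817144):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes the default kubeconfig and its current context.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(8 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods("default").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "integration-test=busybox"})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Println("busybox pod running:", p.Name)
                        return
                    }
                }
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for busybox pod")
    }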

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-382231 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bd789cfa-0bb0-4b27-8a0f-a64eef075dcd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bd789cfa-0bb0-4b27-8a0f-a64eef075dcd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005535105s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-382231 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-817144 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-817144 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-382231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-382231 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-601445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0719 15:40:51.796551   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-601445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m23.452483357s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [101e74e5-8412-4a68-a1f7-723678a7324e] Pending
helpers_test.go:344: "busybox" [101e74e5-8412-4a68-a1f7-723678a7324e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [101e74e5-8412-4a68-a1f7-723678a7324e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006465278s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-601445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-601445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (635.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-817144 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-817144 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m35.322354906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-817144 -n embed-certs-817144
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (635.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (617.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-382231 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0719 15:42:29.032594   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-382231 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (10m17.602740033s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-382231 -n no-preload-382231
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (617.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-862924 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-862924 --alsologtostderr -v=3: (4.284493109s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-862924 -n old-k8s-version-862924: exit status 7 (63.713574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-862924 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
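Here "minikube status" exits non-zero because the profile is stopped, and the test accepts that ("may be ok") as long as stdout reports Stopped. A small sketch of reading such an exit code in Go via *exec.ExitError (binary path and profile name copied from the log; whether a given non-zero code is acceptable remains the caller's decision):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}",
            "-p", "old-k8s-version-862924", "-n", "old-k8s-version-862924")
        out, err := cmd.Output()
        fmt.Printf("stdout: %q\n", out)

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // The log records this case as "status error: exit status 7 (may be ok)".
            fmt.Println("exit code:", exitErr.ExitCode())
        } else if err != nil {
            panic(err)
        }
    }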

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (483.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-601445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0719 15:47:29.031689   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 15:48:52.081591   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
E0719 15:49:28.744557   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/addons-018825/client.crt: no such file or directory
E0719 15:52:29.031980   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/functional-814991/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-601445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m3.34256158s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-601445 -n default-k8s-diff-port-601445
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (483.60s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-850417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-850417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (48.621275766s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.62s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-850417 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-850417 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.116952247s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-850417 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-850417 --alsologtostderr -v=3: (7.366068804s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (94.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m34.637119669s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-850417 -n newest-cni-850417
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-850417 -n newest-cni-850417: exit status 7 (71.644554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-850417 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (78.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-850417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-850417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m18.271885572s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-850417 -n newest-cni-850417
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (78.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (120.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (2m0.248657631s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (120.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-850417 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (4.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-850417 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-850417 --alsologtostderr -v=1: (2.068080556s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-850417 -n newest-cni-850417
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-850417 -n newest-cni-850417: exit status 2 (270.227435ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-850417 -n newest-cni-850417
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-850417 -n newest-cni-850417: exit status 2 (268.041396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-850417 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-850417 --alsologtostderr -v=1: (1.408357131s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-850417 -n newest-cni-850417
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-850417 -n newest-cni-850417
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (85.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m25.485419294s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-526259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-526259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5fsbl" [9e8eab47-7c47-4022-9bb6-c84f21f49910] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5fsbl" [9e8eab47-7c47-4022-9bb6-c84f21f49910] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004957083s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-526259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)
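The DNS check is a single kubectl exec of nslookup inside the netcat deployment. A thin Go wrapper around the same invocation, as a sketch only (the auto-526259 context exists solely within this test run, and kubectl must be on PATH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the command in the log above.
        cmd := exec.Command("kubectl", "--context", "auto-526259",
            "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("in-cluster DNS lookup failed:", err)
        }
    }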

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (101.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m41.155569299s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2fk55" [00be2c65-0402-4f68-83e7-c220be542848] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006757408s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-526259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-526259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nvxck" [0ae6bb12-cb56-477d-9ed0-b3e6ccf526d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 16:09:42.999512   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:43.004811   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:43.015061   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:43.035366   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:43.075693   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:43.156027   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:43.316449   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:43.636830   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:44.277203   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:45.557686   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-nvxck" [0ae6bb12-cb56-477d-9ed0-b3e6ccf526d0] Running
E0719 16:09:48.117883   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:09:53.239026   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003403301s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-526259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (61.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0719 16:10:23.960204   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m1.800119542s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jw57n" [0f1ef8f9-dfff-4402-8335-370df5576b8c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004656304s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-526259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-526259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gvdpp" [3907f5ed-64bb-4821-9f94-26c7577e9673] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gvdpp" [3907f5ed-64bb-4821-9f94-26c7577e9673] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003989616s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)
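The NetCatPod step is identical for every plugin: it force-replaces the netcat deployment from testdata/netcat-deployment.yaml and then waits for a Running pod carrying the app=netcat label. A rough equivalent with plain kubectl (a sketch; the harness itself polls via helpers_test.go rather than kubectl wait) is:

    kubectl --context flannel-526259 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-526259 wait --for=condition=ready pod -l app=netcat --timeout=15m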

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (90.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m30.935408331s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-526259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (93.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0719 16:11:04.921411   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/no-preload-382231/client.crt: no such file or directory
E0719 16:11:08.330793   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-526259 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m33.661118658s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (93.66s)
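Unlike the other Start variants in this group, custom-flannel passes --cni a path to a manifest (testdata/kube-flannel.yaml) rather than a built-in plugin name, and minikube applies that manifest as the cluster's CNI. A hedged sketch of the same invocation outside the test harness, with the profile name and manifest path taken from the log above:

    minikube start -p custom-flannel-526259 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio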

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-526259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-526259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k92fd" [cd04021d-b888-4b3e-a563-9d47b99ace19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k92fd" [cd04021d-b888-4b3e-a563-9d47b99ace19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003757978s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-526259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-526259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-z76t7" [ae9620cb-781b-4f5a-a170-cccb2554bf9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-z76t7" [ae9620cb-781b-4f5a-a170-cccb2554bf9e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005462409s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-526259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-526259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jdr8x" [59f842ce-541b-41e1-a59d-1172b1b538ad] Running
E0719 16:12:09.771644   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/old-k8s-version-862924/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005085862s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
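ControllerPod only verifies that the CNI's own daemon pod is healthy before the connectivity checks run: for Calico a Running calico-node pod in kube-system, for flannel a kube-flannel-ds pod in kube-flannel. A quick manual check (a sketch using the label selector from the log):

    kubectl --context calico-526259 -n kube-system get pods -l k8s-app=calico-node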

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-526259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-526259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w5wvq" [b1fd80b9-3a77-4346-b61f-eb8360611046] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0719 16:12:18.273837   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-w5wvq" [b1fd80b9-3a77-4346-b61f-eb8360611046] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004682307s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-526259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-526259 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-526259 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b4hp2" [48a8827b-e00c-41c6-bd43-7d13c47da415] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b4hp2" [48a8827b-e00c-41c6-bd43-7d13c47da415] Running
E0719 16:12:38.754676   11012 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/default-k8s-diff-port-601445/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004914744s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-526259 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-526259 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
266 TestStartStop/group/disable-driver-mounts 0.13
276 TestNetworkPlugins/group/kubenet 2.89
285 TestNetworkPlugins/group/cilium 3.13
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-885817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-885817
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-526259 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-526259" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:34:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.50.42:8443
  name: NoKubernetes-490845
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.48:8443
  name: pause-464954
contexts:
- context:
    cluster: NoKubernetes-490845
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:34:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: NoKubernetes-490845
  name: NoKubernetes-490845
- context:
    cluster: pause-464954
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-464954
  name: pause-464954
current-context: NoKubernetes-490845
kind: Config
preferences: {}
users:
- name: NoKubernetes-490845
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/NoKubernetes-490845/client.crt
    client-key: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/NoKubernetes-490845/client.key
- name: pause-464954
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/client.crt
    client-key: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/client.key
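The kubeconfig dump above is why every kubenet-526259 query in this debug log fails with "context was not found": the kubenet test is skipped before a cluster is ever created, so only the NoKubernetes-490845 and pause-464954 contexts exist. A quick way to confirm which contexts are available (a sketch, run against the same kubeconfig):

    kubectl config get-contexts
    kubectl config current-context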

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-526259

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-526259"

                                                
                                                
----------------------- debugLogs end: kubenet-526259 [took: 2.735028339s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-526259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-526259
--- SKIP: TestNetworkPlugins/group/kubenet (2.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-526259 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-526259" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19302-3847/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.83.48:8443
  name: pause-464954
contexts:
- context:
    cluster: pause-464954
    extensions:
    - extension:
        last-update: Fri, 19 Jul 2024 15:33:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-464954
  name: pause-464954
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-464954
  user:
    client-certificate: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/client.crt
    client-key: /home/jenkins/minikube-integration/19302-3847/.minikube/profiles/pause-464954/client.key
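Note: the kubeconfig captured above defines only the pause-464954 cluster, user, and context, and its current-context is empty; that is consistent with every kubectl probe in this debugLogs block failing with "context was not found for specified context: cilium-526259". As a hedged illustration (not part of the captured log), listing contexts against the same kubeconfig would be expected to show only the pause-464954 entry:

kubectl config get-contexts
# CURRENT   NAME           CLUSTER        AUTHINFO       NAMESPACE
#           pause-464954   pause-464954   pause-464954   default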

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-526259

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-526259" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-526259"

                                                
                                                
----------------------- debugLogs end: cilium-526259 [took: 2.997460707s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-526259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-526259
--- SKIP: TestNetworkPlugins/group/cilium (3.13s)
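Note: both network-plugin groups above (kubenet-526259 and cilium-526259) were skipped before any cluster was created, so every host-side debugLogs probe falls back to the profile-not-found hint instead of real diagnostics. As a hedged illustration (output not captured in this run), confirming from the same workspace would look roughly like:

out/minikube-linux-amd64 profile list
# expected to list only profiles that were actually started in this run (e.g. pause-464954),
# with no kubenet-526259 or cilium-526259 entries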

                                                
                                    